Vol. 1, No. 2 — Yes, the light was red. Yes, you were a bit distracted but wow that was a fast yellow. No, it didn’t really endanger anyone. And now, a minute later, it looks like you got away with it. No harm, no foul. Lesson learned for another day.
But wait, what’s that?
Those familiar red and blue lights, approaching with some urgency from behind. You know the routine: pull over, engine off, dome light on, and hands on the wheel. Some things never change.
The officer approaches your window and asks your name. As soon as you respond, you can just make out a tiny green light as it briefly flashes behind her sunglasses. “Is this your vehicle?” “Is it insured?” “Where are you going?” Each time a little green flash after your answer.
But then, the formalities are over. “Do you know why I stopped you?” You hesitate, unsure if she actually saw the infraction. An orange glow starts behind the glasses and she slowly reaches up to squeeze a button on the brim of her hat, never breaking eye contact. “Did you run a red light?” A small chime sounds and she moves on to the next question before you can answer. And the next. And the next. Each more specific than the last.
Then with a bit of whiplash, she thanks you for your time and is gone, back in her car and driving off. As the patrol car disappears around the corner your bank notifies you of a $215 transfer to the local authorities: THANK YOU FOR YOUR COOPERATION.
Humankind has been able to read minds since 1875. Only rabbit, monkey, and dog brains to start with, but then in 1924 the first human mind reading was performed with the invention of the electroencephalograph, or EEG. You’ve probably seen one in movies: a hairnet full of wires glued to your scalp, with an umbilical cord attached to a briefcase-sized device. As it operates, the electrical signals from your brain are recorded onto a roll of paper.
So can we deduce a person’s thoughts with an EEG? No. Not really. But wait, aren’t EEGs lie detectors? I thought so too, but no. Polygraph machines look similar but don’t read electrical signals from your brain. They instead measure your pulse, respiration rate, blood pressure, and perspiration. Telling lies should cause discomfort, and a polygraph is designed to track that discomfort.
The fictional situation above is about withholding information, lie detection, and ultimately genuine mind reading, all accomplished without any physical contact. How much of this is already possible?
Digital cameras have advanced so rapidly in the past decade that completely new uses for them are being discovered. Reading someone’s pulse and respiration rate are two of them. Modern camera sensors can deliver such high-fidelity images that it’s possible to accurately track fluctuations in a person’s skin color as their blood circulates. Invisible to the naked eye, these fluctuations previously went undetected. And while we may be able to see someone’s silhouette change as they breathe in and out, that becomes impossible once they’re moving. Not so for a computer, which can estimate someone’s pose and chest shape many times per second. Combine all of this with a forward-looking infrared (FLIR) camera and you get skin temperature as well.
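To make the pulse-from-video trick concrete, here’s a minimal sketch of the underlying idea, often called remote photoplethysmography (rPPG): average the green channel over a patch of skin in each frame, then pick out the dominant frequency in a plausible heart-rate band. The frame layout, frame rate, and band limits below are illustrative assumptions, not any particular product’s method.

```python
import numpy as np

def estimate_pulse_bpm(frames, fps=30.0):
    """Estimate heart rate from a stack of video frames (H x W x 3, RGB).

    Sketch of remote photoplethysmography (rPPG): blood volume changes
    subtly modulate skin color, strongest in the green channel.
    Assumes `frames` is already cropped to a patch of bare skin.
    """
    # 1. One sample per frame: mean green intensity over the skin patch.
    signal = np.array([frame[:, :, 1].mean() for frame in frames])

    # 2. Detrend: subtract a one-second moving average to remove
    #    slow lighting drift so the pulse dominates.
    signal = signal - np.convolve(signal, np.ones(int(fps)) / fps, mode="same")

    # 3. Find the strongest frequency within a plausible heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)   # roughly 42-240 beats per minute
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                  # beats per minute
```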
We’re mostly there for our lie-detecting Judge Dredd scenario. What about true mind reading? Within the last year, there have been two major breakthroughs in this area. When they were announced, the artificial intelligence and neuroscience communities collectively freaked out. Let’s see if there’s reason to.
In late 2022, a research group from Osaka, Japan published a paper describing how they had reconstructed “visual experiences” from human brain activity. Participants in their study looked at images while inside an fMRI (functional magnetic resonance imaging) scanner. These scanners work by tracking blood-flow changes in the brain: more blood flowing to an area means it’s more active than its neighbors. The scanner data was then fed through an artificial intelligence model the group had developed, which generated an image, its best guess at what the person saw.
Their results are astounding. The original-versus-reconstruction comparisons are jaw-droppingly accurate. And these aren’t the typical Zener cards used in telepathy experiments (think the black-and-white shapes and wavy lines from that famous Bill Murray scene in Ghostbusters). No, these are color photographs of real things: a posed teddy bear, an airplane taking off, a snowboarder doing a trick.
Human views image, we scan their brain, we ask an AI what the human saw, it generates a color image. Incredible.
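The paper’s actual pipeline is more elaborate, but the broad recipe reported for this kind of work is surprisingly simple: learn a regularized linear map from fMRI voxel readings to the latent space of a pretrained image generator, then let the generator turn the predicted latent into a picture. Here’s a sketch under that assumption; the file names, array shapes, and the `generator.decode` call are all hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Training data: paired fMRI scans (one voxel vector per image viewed)
# and the latent embeddings a pretrained image generator assigns to
# those same images. Shapes and filenames are made up for illustration.
X_train = np.load("fmri_voxels.npy")      # (n_images, n_voxels)
Z_train = np.load("image_latents.npy")    # (n_images, latent_dim)

# A simple regularized linear map from brain space to latent space.
brain_to_latent = Ridge(alpha=1000.0).fit(X_train, Z_train)

# Decoding a new scan: predict a latent, then hand it to the generator.
x_new = np.load("fmri_new_scan.npy").reshape(1, -1)
z_hat = brain_to_latent.predict(x_new)
# image = generator.decode(z_hat)  # e.g. a pretrained diffusion decoder
```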
The other Earth-shaker happened in May of this year. Using a process similar to the Osaka group’s, a team in Austin, Texas developed a method to turn brain-scan data into speech. And unlike the Japanese group’s system, it wasn’t limited to a single word or sentence the participant had heard; it managed to reconstruct continuous speech over extended periods of time. In addition to recreating the words a person was hearing, it also worked on speech the participant only imagined. When subjects were scanned as they watched silent films, it even generated a plausible narration of what was happening on screen. The research group believes they’ve stumbled onto something hidden in our brains that runs deeper than language.
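Reports of this work describe, roughly, a language model proposing candidate continuations of the transcript while an “encoding model” predicts the brain activity each candidate should evoke; a beam search keeps whichever candidates best match the actual scan. Below is a minimal sketch of that loop; `propose` and `predict_response` are hypothetical stand-ins, not the paper’s code.

```python
import numpy as np

def decode(scan_windows, propose, predict_response, beam_width=10):
    """Sketch of encoding-model-guided beam search.

    scan_windows: list of observed fMRI vectors, one per time window.
    propose(text): returns candidate extended transcripts of `text`.
    predict_response(text): encoding model's predicted fMRI vector.
    """
    beam = [("", 0.0)]  # (decoded transcript so far, cumulative score)
    for observed in scan_windows:
        candidates = []
        for text, score in beam:
            for continuation in propose(text):
                predicted = predict_response(continuation)
                # Higher correlation with the real scan = better fit.
                fit = np.corrcoef(predicted, observed)[0, 1]
                candidates.append((continuation, score + fit))
        # Keep only the best few hypotheses (the "beam").
        beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beam[0][0]  # best-scoring transcript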
A couple of gotchas and a reassurance. These most recent breakthroughs require an MRI machine, which is room-sized and not very budget-friendly. Work is underway to achieve the same results with more portable equipment such as fNIRS (functional near-infrared spectroscopy), but so far it has been unsuccessful: the technique simply can’t measure deep enough into the brain. The methods have also only been tested on a small number of participants, so the results may not hold up as broadly as we think. Oh, and in case you’re worried: participation is always required. Attempting to read a subject’s mind against their will failed spectacularly.
Now, Webster defines telepathy as “communication from one mind to another by extrasensory means.” That doesn’t necessarily mean paranormal, woo-woo charlatans. We may not be able to directly sense someone else’s brainwaves, but they’re there, and we’re starting to accurately interpret what they mean. As Arthur C. Clarke so famously wrote: “Any sufficiently advanced technology is indistinguishable from magic.”
Imagine a slightly different scenario from our opener: running the red light has caused an accident, and the victim is on their way to the hospital, aware but unable to speak. The EMT can ask, “Can you hear me?” “Do you want me to call someone?” “That’s your mom?” “What is her number?” The victim can communicate. They can be reassured.
Extend that to all neurodegenerative diseases. All stroke victims. Anyone with speech difficulties. The medical applications alone convince me of the value of this kind of research. The future is not inherently dystopian or authoritarian. To me, it will be magical.