Thread
#generativeAI makes non-invasive, individualised mind-reading possible by turning thoughts into text after extensive individual training (16 hours of listening to podcasts in an fMRI scanner) www.theguardian.com/technology/2023/may/01/ai-makes-non-invasive-mind-reading-possible-by-turning-tho...
Also, Google is now advertising real-time translation. Though I'd be very nervous about 'speaking' in a language I didn't know and couldn't organically monitor. Remember: everything digital can be hacked.
TBH I doubt it would be THAT different. Except that any "hallucinations" (errors in prediction) would have more internal coherence / may be more linked to externally-valid, but unintended reference. #gpt4 #mindreading cc @jerryptang @HuthLab @alex_ander


There's been some discussion of applications. Such BMI work is often publicly presented as helping the disadvantaged, e.g. those with "locked-in" syndrome, but funded by a military looking for faster, easier communication/control between trained humans & weapon systems. 1/
Both are viable applications, though I worry about intervening before speech where speech is possible, since inhibition of bad ideas often happens very late in the control chain, including of course correction after self-monitoring (e.g. the delete key, "I mean...") 2/
I've heard about people using human (& pigeon!) brains to detect horrific things in pictures BEFORE conscious awareness (& hopefully before most of the damage). Certainty is increased by using multiple people rather than longer individual consideration. (Pigeons still seem to win here, though.) 3/
I'm both excited and, obviously, worried. AI "smile detection" is already being used by the Chinese in their "reeducation camps." But those camps & the associated genocides are the real worry, of which the misuse of probably mostly beneficial tech is just one part. 4/
My main concern, which goes a lot broader than BMI, is that as we use #generativeAI for expression, we more and more directly impose a filter / intermediary of cultural expectations (including knowledge & wisdom, but not limited to that) on everything we say or write. 5/6
I'd say the main finding of my, @aylin_cim & @random_walker's 2017 Science article is that ALL use of language already does that. My experience of last week's #generativeAI/#GPT4 meeting is that a lot of people are as surprised as we were by how much human knowledge is encoded that way. /6
I agree. And this goes way beyond this single application too. We are all becoming more knowable and more predictable. The better models we build, the less information about any particular case we need.

#AIEthics bridges security studies & humanities.

