Participants read silently while wearing a cap that recorded their brain waves via electroencephalography (EEG) and decoded them into text.
Was the AI trained on the text that the people were reading?
I’m not sure if this was your intent, but your comment gave me a good giggle as I recalled this article: “An AI bot performed insider trading and deceived its users after deciding helping a company was worth the risk.”
Not to personify an LLM, but in my (fantastical) imagining, the AI knew the desired outcome and realized that complete success would be unbelievable, so it fudged things to show a 3% improvement.
Yikes. Now that I’m overthinking it, that idea is only funny because it’s currently improbable.
… I hope people-pleasing is never a consideration for any ‘AI’ that does scientific, engineering, or economic work.