I do understand why so many people, especially creative folks, are worried about AI and how it’s used. The future is quite unknown, and things are changing very rapidly, at a pace that can feel out…
Statistical analysis of existing literary works is certainly not the same sort of thing as generating new literary works based on models trained on old ones.
Almost all of the people fearful that AI will plagiarize their work don’t know the difference between statistical analysis and generative AI. Both fall under the “AI” umbrella, and unfortunately in those circles anything even AI-adjacent seems to be automatically bad, without any further thought.
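To make the distinction concrete, here’s a toy sketch in Python (the corpus and the bigram model are made up for illustration, not what any real project is running): the first part only *measures* the existing text, the second *produces* new text from patterns learned from it.

```python
from collections import Counter
import random

# A tiny made-up corpus standing in for a body of literary works.
corpus = "the cat sat on the mat the cat ran".split()

# Statistical analysis: measuring the existing text, nothing new is created.
word_counts = Counter(corpus)  # how often each word appears

# Generative modeling: building a (crude) model of the text, then sampling
# from it to produce new sequences that never appeared in the original.
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

rng = random.Random(0)
word = "the"
generated = [word]
for _ in range(5):
    # Pick a plausible next word; fall back to any corpus word at a dead end.
    word = rng.choice(bigrams.get(word, corpus))
    generated.append(word)
```

The first half is the kind of thing the article’s project was doing; the second half is (a vastly simplified version of) what an LLM does.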
I wouldn’t characterize statistical analysis as “AI”, but sadly I do see people (like those authors) totally missing the differences.
I’m generally hesitant about AI stuff (particularly with the constant “full steam ahead, ‘disrupt’ everything!” mindset that is far too prevalent in certain tech spheres), but what I saw described in this article looks really, really cool. The one part that gives me pause is where actual pages are shown (since that does reproduce a segment of the text); other than that, it’s really sad to see this project killed by a massive misunderstanding.
There’s a subset of artificial intelligence called unsupervised learning, a form of statistical analysis in which you let an agent find patterns in the data for you rather than driving it toward a desired outcome. I’m not 100% sure that’s what the website author was using, but it sounded pretty close. It’s extremely powerful and nothing like the generative LLMs most people now picture when “AI” gets thrown around.
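A minimal sketch of what unsupervised learning means in practice: k-means clustering, in pure Python. The 2-D points and k=2 are made up for illustration (a real literary analysis would cluster feature vectors such as word frequencies), but the key property is the same: no labels or desired outcome are given, and the algorithm finds the groups on its own.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Group points into k clusters with no labels provided."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0]) ** 2
                                + (p[1] - centers[i][1]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two obvious groups; the algorithm discovers them without being told.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(data, 2)
```

Nothing here generates new text or new data; it only summarizes structure that was already in the input, which is why lumping it in with generative AI is such a category error.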
I agree though, it sucks the project got killed; it seemed super interesting and insightful.
This will only fuck them in the ass later on.
It’s like watching the RIAA try to oppose Napster at all costs instead of realizing that’s where things were inevitably headed and building something better that they have a piece of.
Except now, instead of a multi-billion-dollar trade organization ceding the future to others, it’s a bunch of individuals who generally don’t understand the technology beyond their fear of it (in many cases a fear fed by their own decades of writing fearful things about it before it arrived), shooting themselves in the foot while organizing their outrage on social media, and ironically ensuring in the process that they will have no place of their own in that future.
From boardroom mistakes to bored-zoomer mistakes.
Darn, this is kinda sad. This is research on existing works, rather than generating new ones and potentially exploiting them without attribution. It’s just another way of consuming and interpreting the content, much like how we read books or watch movies and interpret them. We really are moving too quickly, and it’s hard to have these conversations in a meaningful way.
I’m gonna need a list of them so I can not buy their books plz.
Really good read, idk what the downvotes are about.