The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional dialects; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language, or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine which language or languages are used in the future? Thoughts and beliefs generally follow language, at least on the social scale; how will LLMs’ effects on language affect how we think and act? What we believe?

-11 points

None, because it’s a dead end and will burn out in the next few years.

9 points

is this one of those “tech bros bad” takes?

1 point

Seeing as I’m very much a tech bro, no, hah.

1 point

That’s certainly possible. LLMs led to these questions for me but I think they will equally apply to any communicative AI that sees wide adoption.

2 points

Do you actually believe this?

LLMs are the opposite of a dead end. More like the opening of a pipe. It’s not that they will burn out; it’s that they’ll reach a point where they’re just one function of a more complete AI, perhaps.

At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools for proving truth, or anything like that.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response to that extent, then the LLM has done its job, regardless of whether the answer is correct.

Validating the facts of the response is another function entirely, one which would employ LLMs as a translation tool.

It’s not a long leap from there to a language-translation tool between humans, with an AI acting as interpreter. DeepL on roids.

3 points

My belief is that LLMs are a dead end that will eventually burn out, but only because they’ll be replaced with better models. In other words, machine text generation will outlive them, and OP’s concerns mostly apply to machine text generation, not to that specific technology.

3 points

Do you actually believe this?

Yes. I’m also very happy to be proven wrong in the years to come.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response to that extent, then the LLM has done its job, regardless of whether the answer is correct

I don’t want to get too philosophical here, but you cannot detach understanding / comprehension from the accuracy of the reply, given how LLMs work.

An LLM, through its training data, establishes what an answer looks like based on similarity to what it’s been taught.

I’m simplifying here, but it’s like an actor in a medical drama. The actor is given a script that they repeat, that doesn’t mean they are a doctor. After a while the actor may be able to point out an inconsistency in the script because they remember that last time a character had X they needed Y. That doesn’t mean they are right, or wrong, nor does it make them a doctor, but they sound like they are.

This is the fundamental problem with LLMs. They don’t understand, and in generating replies they just repeat. It’s a step forward on what came before, that’s definitely true, but repetition is a dead end because it doesn’t hold up to application or interrogation.
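To make the "repetition" point concrete, here's a deliberately crude toy: a bigram text generator. This is emphatically not how LLMs actually work (they use neural networks trained on vast corpora, not lookup tables), but it illustrates the same failure mode in miniature: it produces locally plausible word sequences purely by repeating patterns from its "training" text, with zero understanding of what the words mean.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word that has followed it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Chain words together by sampling from observed successors."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: this word was never followed by anything
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the patient has a fever the patient needs rest the doctor sees the patient"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every adjacent word pair in the output was seen in the training text, so it "sounds" like the corpus, much like the actor sounds like a doctor, but the generator has no notion of whether the resulting sentence is true or even coherent.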

The human-machine interface part, of being able to process natural language requests and then handing off those requests to other systems, operating in different ways, is the most likely evolution of LLMs. But generating the output themselves is where it will fail.

2 points

So I feel like we agree here. LLMs are a step toward solving a low-level human problem; I just don’t see that as a dead end. If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.

I appreciate your analogy; I am well aware LLMs are just clever recursive conditional queries over big, semi-self-updating datasets.

Regardless of whether or not something replaces LLMs in the future, the data, and the processing that’s gone into it, will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

11 points

I will share a journal entry from when I was mulling this over last December. Interested in your thoughts:

In old media, such as books and movies, we passively receive the media. We hear stories of heroes, songs about how the singer feels, written thoughts from inside another writer’s mind. These are valuable because of how we connect with others and thereby grow.

Interactive media, e.g. video games, allow us to tinker with a story and thereby interrogate our relationship and attitude towards its ideas and themes. We pull a lever, and the story changes direction. Video games have become such a large industry thanks to the more profound personal connection we can develop with the art through prescribed mechanical interactions. We press the buttons, and become the hero.

With the advent of artificial intelligence, it won’t be long before someone invents a new form of storytelling predicated on this technology. While we used to read stories, it now becomes possible for stories to be read into us. An AI can now be created that observes your life, and makes sense of it in a profound larger context.

This new media would be an AI companion who acts as a fourth wall of your life; layering your struggles and triumphs within a larger context, lightly editorializing, adding soundtracks that seamlessly portray your energy and emotional state (or humorously juxtapose it), adding humorous asides or callbacks that keep you in the moment, gently reminding and prompting next activities, reflecting on failures or calling attention to bad habits one is trying to break, and generally contriving to elevate the daily experience to the level of storytelling. It would give life an enhanced sense of meaningful examination, refining our sense of self and bringing our life into focus. This is a form of media that is not itself passively received, but actively treats your life as a fully interactive lived experience.

Art is integral to our ability to relate to others, experience things that are larger than ourselves, and to create meaning. This “fourth wall” AI would be a new form of media that seeks to amplify our understanding of ourselves, integrating our egos with our life as it exists as we change and grow throughout life.

The risks posed by malfeasant propagation of such a medium are at once beyond imagining and entirely predictable; the manufacturing of consent, the corrupting influence of profit motives, and the use of media as a social control mechanism are all pre-21st century concepts in media.

Whether a “fourth wall AI” represents a new threat or merely a quantum leap in the scale of preexisting threats cannot be known in advance. All of the above is to merely assert that we will see, and that such a medium could theoretically be used as art in the true sense, if such technology can be put in the hands of artists, and not just corporations.

5 points

This is both terrifying and fascinating at the same time. The potential in either direction is immense. Imagine having a soundtrack to daily life tailored to what is happening; would you hear boss music if you messed up at work, school, etc.?

How this is viewed by the user is a good question as well. If it’s broadcast on a speaker or projector, everyone else can see and hear what the AI is showing us too. If it’s only viewable to us through implants or some sort of smart-glasses technology, then it’s “private” to the user.

Like you mention, the potential abuse of this system is unimaginable: ads shown directly in our vision, with a paid tier that is ad-free. Music has been shown to easily affect emotional state (movies being a prime example); what if the soundtrack were used to emphasize certain goals? The AI pushes you to buy a certain car brand over another because said car brand paid the AI company more.

5 points

As McLuhan said, the medium is the message. If this is a story the AI is telling about you, to you, then it would probably best be a purely private experience happening in headphones or around the house. If this is a story where you are woven into the world as a character in other peoples’ lives, it should be happening around you. In cinematic storytelling we talk about whether something is “diegetic”: is the soundtrack coming from something in the world, or is it something only the audience can perceive as part of a constructed experience? If the goal of a “fourth wall AI” is (in my opinion, should be) to make your perception of yourself more fully unified with the world around you, I’d advocate for these things to be realized ‘diegetically’, so that multiple fourth-wall AIs would have to work together to create harmony in the reality they construct for their audiences.

On a more sociological level, I am worried about how much we each feel separate and different from one another in society. I think that having the fourth wall AI be strictly a public phenomenon would be a better choice, not just from an artistic perspective, but for its potential to reinforce our social fabric.

4 points

This is a really interesting, thoughtful comment (and exactly why I love lemmy).

I don’t know if it’s just my lack of imagination, but I find your description of an AI pet/companion as an art/media object much more plausible and interesting than discussions of their possible sentience. It really doesn’t seem too many steps from Spotify’s Discover Weekly playlist, or Google Assistant reading all my emails, combined with an LLM’s capacity to plausibly bullshit, to having a ‘virtual friend’ who texts me jokes, questions, and whatnot.

Especially since we’ve both normalised interacting with humans in entirely digital ways and created a massive corpus of how humans interact via social media archives. Why would I want a calendar app pinging me a notification when I could have a virtual companion message me “I hope your haircut goes well this afternoon, looking forward to seeing your new look!” or “don’t fucking forget your appointment again you dumbass”, depending on which companion I purchased.

Given many people’s preference to “get everything in one place”, it seems likely that instead of newsletters, comedy subs, or travel updates, we’ll just have different imaginary friends sending us the stuff we need or want to know, mixed in with our actual friends. Some of whom might as well be virtual, since we never see them in the flesh.

2 points

I don’t even truly understand my own sentience - how can we ask a machine to replicate something we do not understand? It would be like throwing rocks into a dark pool, and hoping something friendly crawls out of the water.

35 points

It’ll be interesting to see how it affects the average person’s written communication. When we know technology can handle something for us, our brains seem to let it carry the load. Think of all the people who aren’t great communicators or might not be confident in their English who would love to rely on this already.

I guess it’s a matter of perspective whether you view it as a crutch or a boon, which I’m sure has been a conversation about many pieces of technology over the years:

People were better at remembering phone numbers before cell phones stored them. People were better at spelling words before spell check and autocorrect. People were better at writing by hand before typewriters and keyboards. And so on.

4 points

Damn. I hadn’t made that connection yet, that’s actually quite worrying.

If reliance on LLMs does begin to affect language skills negatively, it could become a significant problem. Political implications aside, I believe that people are more capable of navigating personal relationships when they have a strong command of language.

15 points

People who rely too heavily on autocorrect already cause misunderstandings by writing something they did not intend to.

I had a friend at uni who was dyslexic, and while the individual words in his messages were spelled properly, you still had to guess the meaning from the randomly thrown-together words he presented you with.

Now that we can correct not only a single word or roughly the structure of a sentence, but can instead fabricate whole paragraphs and articles from a single sentence, I imagine we will see a stark increase in low-quality content, accidental false information, and easily preventable misunderstandings; more than we already have.

3 points

Each generation thinks they did it the right way and that younger ones have it easy. You can go back centuries and find people pushing each other down like this.

What should be encouraged is the exchange of ideas and healthy debate. Words are just a tool for that, and spelling, grammar, and “not knowing Latin” are just components of it.

A couple of generations down the road, we may be able to accurately transmit our thoughts to other people, calibrated for their culture and the biases they grew up with, and the generation immediately before will whine that LLMs were the right way to communicate.

8 points

Eh, LLMs do have a significant problem in that they can generate false information by themselves. Every prior tool required a person to create that false information, but LLMs can simply generate it when asked a question.

2 points

So what’s your point? Should we trust machines less than the unhinged uncle at Thanksgiving?

16 points

Maybe they’ll help people sort out the difference between “affect” and “effect”.

10 points

On the contrary, AI will have been trained on so much bad grammar that bad grammar will become even more ingrained in society.

4 points

Idk about that, I haven’t noticed any spelling mistakes so far (except that one post where they asked GPT-3.5 to list the steps in counting the A’s in “mayonnaise” and it counted like 4 of them).

2 points

Something I’m worried about is how the diversity of language is going to be affected. Nowadays we have AIs, such as Grammarly, that tell us how we should be writing things. I fear that most language, especially the more “formal” registers, will devolve to basically “what Grammarly says is correct”. With the spread of platforms using AI to moderate their content, I worry the way people talk will also be influenced by trying to appease an algorithm. You can kind of see it with YouTube, where people have to avoid using certain terms or YouTube will steal their income. There’s a beauty in diversity; it’d be a shame to have that erased and people forced to all be the same sterile person.

What about programming languages?

Programming languages are fundamentally different from spoken languages. If AI develops to a point where it can be used to control a computer, I’d say that’s not programming unless you describe what you want extremely precisely (in which case, using an AI to muddy it up isn’t helpful).

Thoughts and beliefs generally follow language

Isn’t this actually a myth? There’s value in giving names to things, but those names don’t change how people feel or what they believe in. I’d imagine most people sometimes feel emotions “they can’t quite put a finger on”, and their political beliefs don’t really line up with any single manifesto.


Ask Lemmy

!asklemmy@lemmy.world


A Fediverse community for open-ended, thought-provoking questions
