The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional ones; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine what language or languages will be used in the future? Thoughts and beliefs generally follow language, at least at the social scale; how will LLMs’ effects on language affect how we think and act? What we believe?

-11 points

None, because it’s a dead end and will burn out in the next few years.

9 points

is this one of those “tech bros bad” takes?

1 point

Seeing as I’m very much a tech bro, no, hah.

1 point

That’s certainly possible. LLMs led to these questions for me but I think they will equally apply to any communicative AI that sees wide adoption.

2 points

Do you actually believe this?

LLMs are the opposite of a dead end. More like the opening of a pipe. It’s not that they will burn out; it’s that they’ll reach a point where they’re just one function of a more complete AI, perhaps.

At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools to prove truth, or anything like that.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response in turn, then the LLM has done its job, regardless of whether the answer is correct.

Validating the facts of the response is another function again, which would employ LLMs as a translation tool.

It’s not a long leap from there to a language translation tool between humans, where an AI is an interpreter. DeepL on roids.

3 points

Do you actually believe this?

Yes. I’m also very happy to be proven wrong in the years to come.

If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response in turn, then the LLM has done its job, regardless of whether the answer is correct

I don’t want to get too philosophical here, but you cannot detach understanding / comprehension from the accuracy of the reply, given how LLMs work.

An LLM, through its training data, establishes what an answer looks like based on similarity to what it’s been taught.

I’m simplifying here, but it’s like an actor in a medical drama. The actor is given a script that they repeat; that doesn’t mean they are a doctor. After a while the actor may be able to point out an inconsistency in the script, because they remember that last time a character had X they needed Y. That doesn’t mean they are right or wrong, nor does it make them a doctor, but they sound like they are.

This is the fundamental problem with LLMs. They don’t understand; in generating replies they just repeat. It’s a step forward from what came before, that’s definitely true, but repetition is a dead end because it doesn’t hold up to application or interrogation.
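The “repeat what the training data did” point can be seen in a toy sketch. This is emphatically not how real LLMs work (they use neural networks over tokens, not lookup tables), and the corpus here is made up, but it shows how a model can produce fluent-looking continuations purely by echoing patterns it has seen:

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5):
    """Greedily emit the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no continuation ever seen for this word
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

corpus = "the patient has a fever the patient needs rest"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-sounding, but purely recombined training text
```

The output sounds like the corpus without the model “knowing” anything about patients or fevers, which is the actor-with-a-script dynamic in miniature.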

The human-machine interface part, of being able to process natural language requests and then handing off those requests to other systems, operating in different ways, is the most likely evolution of LLMs. But generating the output themselves is where they will fail.

2 points

So I feel like we agree here. LLMs are a step toward solving a low-level human problem; I just don’t see that as a dead end… If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.

I appreciate your analogy, I am well aware LLMs are just clever recursive conditional queries with big semi self-updating datasets.

Regardless of whether or not something replaces LLMs in the future, the data, and the processing that’s gone into it, will likely be reused along with the lessons we’re learning now. I think they’re a solid investment from any angle.

3 points

My belief is that LLMs are a dead end that will eventually burn out, but only because they’ll be replaced with better models. In other words, machine text generation will outlive them, and OP’s concerns mostly regard machine text generation, not that specific technology.

0 points

I figure they can either help or harm, depending on implementation:

Huggingface ( I always think of the “face-huggers” in Alien, when I see that name… and have NO idea why they thought that association would be a Good Thing™ ) has an LLM which apparently can do Sanskrit.

Consider, though:

All the Indigenous languages, where we’ve only actually got a partial-record of the language, and the “majority rule, minority extinguishes” “answer” of our normal process … obliterated all native speakers of that language ( partly through things like residential-schools, etc )…

now it becomes possible to have an LLM for that specific language, & to study the language, even though we’ve only got a piece of it.

This is like how we’ve sooo butchered the ecology that we can only study pieces of it now; there’s simply too-much missing from what was there a few centuries ago, so we’re not looking at the original/proper thing, either in ecologies or in languages.

sigh

This wasn’t supposed to be depressing.


Consider how search-engines have altered how we have to communicate…

In order to FORCE a search-engine to consider a pair-of-words to be a single-term, you have to remove all intervening space/hyphens/symbols from between them.

ClimatePunctuation is a single search-token, but “Climate Punctuation” is two separate, unrelated terms, which may or may-not appear in the results.

It’s obscene.

I’m almost mad-enough to want legislation forcing search-engines to respect some kind of standard set of defaults ( add more terms == narrowing the search, ie defaulting to Boolean AND, as one example ),

so they’d stop enshittifying our lives while “pretending” that they’re helping.
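The “more terms should narrow the search” default the comment asks for is simple to state in code. A minimal sketch, with a hypothetical `search` helper doing naive whole-word matching over a made-up document list, no ranking or stemming:

```python
def search(documents, query):
    """AND semantics: every query term must appear; adding terms narrows results."""
    terms = query.lower().split()
    return [doc for doc in documents
            if all(term in doc.lower().split() for term in terms)]

docs = [
    "climate punctuation in the fossil record",
    "climate change policy",
    "punctuation rules for English",
]

print(search(docs, "climate"))              # two matches
print(search(docs, "climate punctuation"))  # narrowed to the one doc with both terms
```

Under this default, “Climate Punctuation” as two words behaves like the single-token query the commenter wants, because both terms are required rather than merely optional.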

( there was a Science news site which would not permit narrowing-of-search, and I hope they fscking died.

Making search unusable on a science site??

probably some “charity” who pays most of their annual-budget to their administration, & only exists for their entitlement.

I’m saying that after having encountered that religion in charities. )


Interesting:

search-engines alter our use-of-language,

social-sites do too,

LLM’s do too,

marketing/propaganda does,

astroturfing does,

… it begins looking like real events are … rather-insignificant … influences in our languages?

Hm…

5 points

Wait, who says it is the dominant branch of “AI development”? What does that mean, in fact? Who says it was telephone and radio that led to English hegemony? Who said thoughts and beliefs follow language? Most of the people I know in related fields seem to think that’s widely disproved. I mean, no bad questions, but there’s a TON of built-in assumptions in the OP and not all of them check out.

FWIW, I don’t know that generated language gets to change much if it’s generated by inferring likely language from human sources. At most there may be a newfound premium on using original, spontaneous-sounding language in writing, just to prove one’s humanity by distinguishing it from bland, generated language, but I suppose even that depends on how the tech moves forward.

5 points

The word arafed will enter the common lexicon.

3 points

Is that a-rafed or ara-fed?

3 points

Only the LLMs know.

16 points

Maybe they’ll help people sort out the difference between “affect” and “effect”.

10 points

On the contrary, AI will have been trained on so much bad grammar that it will become even more ingrained in society.

4 points

Idk about that, I haven’t noticed any spelling mistakes so far (except that one post where they asked GPT-3.5 to list the steps in counting the A’s in “mayonnaise” and it counted like 4 of them)
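Part of why that failure stands out: counting letters is a one-liner in ordinary code, deterministic every time, while an LLM generates plausible-sounding text *about* counting rather than actually counting.

```python
word = "mayonnaise"
count = word.count("a")  # exact character count, not a token-by-token guess
print(count)  # 2
```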

