8 points

I’ve noticed these language models don’t work well on articles with dense information and complex sentence structure. Sometimes they drop the most important point.

They’re useful as a TLDR but shouldn’t be taken as fact — not yet, and probably not for the foreseeable future.

A bit off topic, but I read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That’s why it’s so important to mark LLM-generated text (which the TLDR bots do).

5 points

Not calling ML models and LLMs “AI” would also help. (Going even more off-topic.)

3 points

I think the Internet would benefit a lot if people marked their information with sources!

  • Source: my brain
2 points

Yeah, that’s right. Having to post sources rules out LLMs for the most part, since most of them do a terrible job of providing sources, even when the information happens to be correct.


Open Source

!opensource@lemmy.ml


All about open source! Feel free to ask questions and share news and interesting stuff!


Rules

  • Posts must be relevant to the open source ideology
  • No NSFW content
  • No hate speech, bigotry, etc.


Community icon from opensource.org, but we are not affiliated with them.

Community stats

  • 5.3K monthly active users
  • 1.7K posts
  • 29K comments