130 points

*bad Devs

Always look at the official repository. Not just to see whether it exists, but also to make sure it isn't a fake or malicious one

94 points

*bad Devs

Or devs who don’t give a shit. Most places have a lot of people who don’t give a shit because the company does not give a shit about them either.

40 points

What’s the diff between a bad dev and a dev that doesn’t care? Either way, whether it’s lack of skill or lack of care, a bad dev is a bad dev at the end of the day.

29 points

I can be good at a trade, but if I’m working for a shit company with shit pay and shit treatment, they’re not going to get my best work.

You get out what you put in, that’s something employers don’t realise.

17 points

The difference is whether the fault for the leak of your personal data rests with the worker who was incompetent, or the employer who didn’t pay for proper secure software.

26 points

You’d be surprised how well someone who wants to can camouflage their package to look legit.

7 points

True. You can’t always be 100% sure. But a quick check of the download and version counts can help. And while searching for it in the repo, you’ll see other similarly named packages, which can save you from a typosquatter.

Besides, it’s not just about security. What if the package you’re installing has a big banner in the readme that says “Deprecated and full of security issues”? It’s not a bad package per se, but still something you need to know
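That kind of quick check can even be scripted against PyPI’s JSON API. A minimal sketch — the release-count threshold is an arbitrary illustration, not an established red-flag value:

```python
import json
from urllib.request import urlopen


def release_count(metadata: dict) -> int:
    """Number of published versions in PyPI-style JSON metadata."""
    return len(metadata.get("releases", {}))


def looks_suspicious(metadata: dict, min_releases: int = 3) -> bool:
    """Very few releases is one possible red flag for a freshly
    uploaded typosquat; the threshold here is a made-up example."""
    return release_count(metadata) < min_releases


def fetch_metadata(name: str) -> dict:
    """Fetch package metadata from PyPI's JSON API (needs network)."""
    with urlopen(f"https://pypi.org/pypi/{name}/json") as resp:
        return json.load(resp)
```

It’s only a heuristic — a patient attacker can fake stats too — but it catches the laziest squats for free.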

2 points

Yeah, I’m confused about what the intent of the comment was. Apart from a code review, I don’t understand how someone would be able to tell that a package is fake. Unless they’re grabbing it from a place with reviews/comments to warn them off.

1 point

the first, most obvious sign is multiple identical packages, all appearing to be the same thing, with weird stats and figures.

And possibly weird sizes. Usually people don’t try hard on package managing software, unless it’s an OS for some reason.
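A cheap way to spot the near-miss names mechanically is fuzzy-matching a candidate against the packages you actually meant to depend on. A sketch — the allow-list is a made-up example:

```python
import difflib

# hypothetical allow-list: the packages your project actually uses
KNOWN_PACKAGES = ["requests", "numpy", "pandas", "urllib3"]


def possible_typosquat(name: str, known=KNOWN_PACKAGES) -> list[str]:
    """Known names this one closely resembles without matching exactly,
    a classic typosquatting tell (e.g. 'reqeusts' vs 'requests')."""
    if name in known:
        return []  # exact match: not a squat of anything on the list
    return difflib.get_close_matches(name, known, n=3, cutoff=0.8)
```

Anything this flags deserves a second look before it goes in a requirements file.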

1 point

That’s what my ex wife used to say

1 point

we just experienced this with xz/liblzma on Debian, according to recent reports. Two years of either manufactured dev history, or one very, very weird episode.

17 points

The official repositories often have no useful oversight either. At least once a year, you’ll hear about a malicious package in npm or PyPI getting widespread enough to cause real havoc. Typosquatting runs rampant, and formerly reputable packages end up in the hands of scammers when their original devs try to find someone to hand them over to.

51 points

Can we fucking stop anthropomorphising software?

76 points

“Hallucinate” is the standard term for GenAI models coming up with untrue statements.

24 points

in terms of communication utility, it’s also a very accurate term.

when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

when AIs hallucinate, it’s because their predictive models generate results that don’t align with reality, flying off the rails and presuming what was calculated as likely to exist rather than referencing positively certain information.

it’s the same song, but played on a different instrument.

5 points

when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

Is it really? You make it sound like this is a proven fact.

2 points

I think a more accurate term would be confabulate based on your explanation.

-12 points

They don’t come up with any statements, they generate data extrapolating other data.

16 points

So just like human brains?

-38 points

What standard is that? I’d like a reference.

12 points

No?

An anthropomorphic model of the software, wherein you can articulate things like “the software is making up packages”, or “the software mistakenly thinks these packages ought to exist”, is the right level of abstraction for usefully reasoning about software like this. Using that model, you can make predictions about what will happen when you run the software, and you can take actions that will lead to the outcomes you want occurring more often when you run the software.

If you try to explain what is going on without these concepts, you’re left saying something like “the wrong token is being sampled because the probability of the right one is too low because of several thousand neural network weights being slightly off of where they would have to be to make the right one come out consistently”. Which is true, but not useful.

The anthropomorphic approach suggests stuff like “yell at the software in all caps to only use python packages that really exist”, and that sort of approach has been found to be effective in practice.
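Besides prompting, a harness around the model can mechanically check whether suggested imports resolve at all before any generated code runs. A sketch of the idea, not any specific tool’s API:

```python
import importlib.util


def importable(module_name: str) -> bool:
    """True if the module resolves in the current environment."""
    return importlib.util.find_spec(module_name) is not None


def hallucinated_imports(suggested: list[str]) -> list[str]:
    """Suggested modules that don't actually exist here -- candidates
    to bounce back to the model for correction."""
    return [m for m in suggested if not importable(m)]
```

The same loop can feed the failures back into the prompt (“these modules do not exist, try again”), which tends to work better than hoping the first answer was grounded.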

2 points

Whoops, sorry mate, too late.

39 points

I just want an LLM with a reasonable context window so we can actually write real working packages with it.

The demos look great, but it’s always just around 100 lines of code, which is beginner level. The only use case right now is fake packages.

12 points

Just use the AI Horde. iirc our standard is like 4K context and some people host up to 8K. Here’s a frontend


8k context is nothing.

3 points

Claude is 200k

5 points

those are tokens not lines of code…

6 points

??? The top-level commenter wants an LLM with a big context window, and the other commenter responded with an LLM that has a 200k-token context window, which is waaaaaay more than “100 lines of code”.

2 points

Yeah sorry, I thought that was clear. It’s how context is measured.

2 points

I use it for writing functions a lot; tell it the inputs and desired outputs and it’ll normally make what I want. Recently GPT has got good at continuing where it left off too.

1 point

I’m using Codeium for that. Works pretty well as a glorified autocomplete, but not much more. Certainly saves a lot of typing though, but I have to double-check everything it produces, because sometimes it adds subtle errors.

1 point

I have tried the copilot integration in edge out of curiosity, and if you feed the ai the context of the page the response can be useful. There is a catch, tho:

  • When opening a document, the accepted formats are HTML, TXT, and PDF. The documentation of a software package can be summarized, but the source will be the context of the page and not a web search, which is good in this case.

  • When generating new information, the model can be far too synthetic, cutting out potentially useful information.

I still think you need to read the documentation yourself, maybe using the AI integration only when you need a general idea of the document.

What I do is first read the summary of the document by bullet point, then read the PDF file as a whole. By the time I do so, the LLM has given enough of a structure to facilitate my reading…

1 point

I’m not particularly interested. Some on my team are playing with it, but I honestly don’t see much point since they spend more time fixing the generated code than they would writing it.

And I don’t think it’ll ever really work well (in the near-ish future) for the most common type of dev work: fixing bugs and making small changes to existing code.

It would be awesome if there was some kind of super linter instead. I spend far more time reading code than writing it, so if it can catch bugs, that would be interesting.

1 point

In my experience with Codeium, it sometimes works ok for three or four lines of code at once. I’ve actually had a few surprises where it nailed what I was going for where I didn’t expect it. But most of the time, it’s just duplicating code from elsewhere in the same file, which usually doesn’t make sense.

It’s also pretty good for stuff where I’d usually build some exotic regex to search/replace (or do it manually, because it’d take longer to come up with the expression), like transforming an enum into a switch construct for its members, or mapping said enum to a string of the member’s name.

This is very far from taking over my job, though. I’d love to be more of a conductor than the guy playing all instruments in the orchestra at once.

1 point

To each their own, of course. It just seems like the productivity gains are perceived, not actual.

For an enum to a switch, I just copy the enum values and run a regex on those copied lines. Both would take me <30s, so it’s a wash. That specific one would be trivial with most IDEs as well, just type “switch (variable) {” and it could autocomplete an exhaustive switch, all without LLMs.

Then again, I’m pretty old school. I still use vim as my editor (with language server plugins), and I’m really comfortable with those kinds of common tasks. I’m only going to bother learning to use the LLM if it’s really going to help (e.g. automate writing good unit tests).
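The enum-to-switch rewrite described above really is a one-regex job. A sketch in Python — the enum body and the C-style output shape are made-up examples:

```python
import re

# hypothetical enum body pasted from a C-style source file
enum_body = """\
    Red,
    Green,
    Blue
"""

# turn each member into a case label that maps it to its own name
cases = re.sub(
    r"^\s*(\w+),?\s*$",
    r'    case \1: return "\1";',
    enum_body,
    flags=re.MULTILINE,
)
switch = f"switch (value) {{\n{cases.rstrip()}\n}}"
print(switch)
```

In an editor the same substitution is a single `:%s` in vim or a regex find-and-replace in any IDE, which is the point being made: it’s under 30 seconds either way.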

26 points

One of the first things I noticed when I asked ChatGPT to write some terraform for me a year ago was that it uses modules that don’t exist.

11 points

The same goes for Ruby. It just totally made up language features and gems that seemed to actually be from Python.

6 points

Not that it’s a programming language, but it also makes up rules for 5e D&D if you ask to play a game.

3 points

They really are just like us.

2 points

Could you give an example? I really want to know what silly rules it came up with.

3 points

It seems to shortcut implementations that require more than one block, and mimics parameters from other functions.

6 points

I have this problem with ChatGPT and PowerShell. It keeps referencing functions that don’t exist inside modules, and when I say “that function doesn’t exist,” it replies “try reinstalling the module.” I do, the function still isn’t there, so I ask it for another way to do it, and it just goes back to the first suggestion, around in circles. ChatGPT works great sometimes, but honestly I still have more success with Stack Overflow.

8 points

Yeah, had that on my very first attempt at using it.

It used a component that didn’t exist. I called it out and it went “you are correct, that was removed in <older version>. Try this instead.” and created an entirely new set of bogus components and functions. This cycle continued until I gave up. It knows what code looks like, and what the excuses look like and that’s about it. There’s zero understanding.

It’s probably great if you’re doing some common homework (Javascript Fibonacci sequence or something) or menial task, but for anything that might reach the edges of its “knowledge”, it has no idea where those edges may lie so just bullshits.
