well, this sure is gonna go well :sarcmark:
it almost feels like when Google+ got shoved into every google product because someone had a bee in their bonnet
flipside, I guess, is that we’ll soon (at scale!) get to start seeing just how far those ideas can and can’t scale
I keep reading the headline as “copulating” and giggling. Sorry.
For your team of developers to deploy at the speed and scale that you need to lead in the market, your developers must be empowered with AI at every step of the software development life cycle, customized and fine-tuned to your codebase.
I feel like I know who the target audience for this post is, and it’s not programmers
the strange thing about copilot is that it is about writing code but github isn’t a code-writing product (besides that text editor they made), it’s a version-control & repository product.
through my deeply-rabbit-holed product-design-theory lens this translates as another manifestation of trying to turn something that is concrete in purpose, which gives concrete accountability to the company behind it, into something general in purpose, which dilutes that concrete accountability over time.
“This is a public/private version-controlled repository product”
vs
“This is a collaboration and productivity product”
relatedly, I recently had to work on a project in a language I’ve barely touched before. also happened to notice that apparently I have free copilot “because of my open source contributions” (??? whatever)
so I tried it out
it was one of the use cases people keep telling me it’s ideal for. unfamiliar territory! a new language! something I have little direct experience in but could possibly navigate with my other knowledge, using this to accelerate!
well, it performed just about exactly to my expectations! I didn’t really try hard to take down numbers, but I’d say easily 30%+ of suggestions had simple (and possibly subtle?) errors, and 60%+ were completely wrong. the mechanic for the latter is the same bullshit they push with prompts: “choose the one you like best” > “cycle through the completion suggestions”
observed things like logic inversions and incorrect property references, both of which are errors I don’t know whether a learning-to-program person (someone using it in the “this is magical! I can just make it type code for me!” sense) would be able to catch without some amount of environment tooling or zen debugging (and the latter only if/when they get into the code-reading mindset). at multiple points, even when I provided extremely detailed prompts, it would just fail to synthesise working biz logic
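to make the “subtle error” point concrete, here’s a made-up snippet (Python, not the language I was actually working in, and not real Copilot output) showing the two failure modes I mean: a condition that’s inverted and a reference to the wrong property, both of which read fine at a glance:

```python
# hypothetical illustration of the failure modes described above,
# NOT actual Copilot output -- all names are invented

class Order:
    def __init__(self, total, shipped):
        self.total = total
        self.shipped = shipped

# intended logic: sum totals of orders that have NOT shipped yet
def pending_revenue(orders):
    return sum(o.total for o in orders if not o.shipped)

# the kind of completion I mean: plausible-looking, but the
# condition is inverted AND it sums the wrong attribute
def pending_revenue_bad(orders):
    return sum(o.shipped for o in orders if o.shipped)

orders = [Order(100, False), Order(50, True)]
print(pending_revenue(orders))      # 100
print(pending_revenue_bad(orders))  # 1 -- sums booleans, no crash, just wrong
```

the nasty part is that the bad version runs without error, so nothing flags it unless you actually read the code or have tests around it.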
and that was all for just simple syntax and language stuff. I didn’t even try to do things with libraries or such. I’m gonna bet that my previous guesses are also all fairly on point
all in all: underwhelming. I remain promptdubious.
Yeah, that always perplexed me. Copilot is a terrible tool if you’re using it in a domain you aren’t already proficient in, because it will mess up. And even worse, it’ll mess up in subtle ways that most people wouldn’t do themselves. You need to know what you’re doing to babysit it and ensure it doesn’t fuck everything up.
Since I’m using D for most of my coding projects, Copilot is mostly useless. I can do boilerplate stuff with metaprogramming way more easily and at a way more consistent pace, not to mention it’s easier to modify. And ChatGPT, at least, does not know what to do with D attributes, and just randomly puts them on functions everywhere, since it’s way less of an AI than how tech people try to sell it (statistical probability based on Markov chains and small context windows rather than actually understanding what it does).
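the metaprogramming-for-boilerplate point in a rough sketch (Python stand-in rather than D, since not everyone here reads D; all names are made up for illustration): generating repetitive methods mechanically, so they’re consistent by construction and trivial to change in one place:

```python
# rough Python analogue of D-style boilerplate generation --
# a hypothetical sketch, not anyone's actual codebase

def with_getters(*fields):
    """Class decorator that generates one getter per field name."""
    def decorate(cls):
        for f in fields:
            # default arg pins the field name at definition time
            setattr(cls, f"get_{f}", lambda self, _f=f: getattr(self, _f))
        return cls
    return decorate

@with_getters("name", "age")
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

p = Person("ada", 36)
print(p.get_name(), p.get_age())  # ada 36
```

change the generator once and every generated method changes with it, which is the consistency an autocomplete model can’t guarantee.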