The AI boom is screwing over Gen Z | ChatGPT is commandeering the mundane tasks that young employees have relied on to advance their careers. That’s going to crush Gen Z’s career path.
So what would that mean for the company itself long-term? If they’re not training up their employees, and most of the entry-level work is replaced by text-generator output, there would be a hole as executives and managers move out of the company.
It seems like a recipe for the company to implode after a few years or decades, assuming that the managerial/executive positions aren’t replaced as well.
there would be a hole as executives and managers move out of the company.
And why would those executives and managers care about that? They just need to time their departures early enough that those holes don’t hit the share price. Welcome to modern capitalism, where the C-suite’s only goal is to deploy their golden parachutes while the company still has enough cash left over to pay them.
Yeah, it should be obvious by now, after three decades of 1980s-MBA-style corporate management (and a financial crash that happened exactly along those lines), that “the bonus comes now, the problems come after I’ve moved on” measures will always get a go-ahead from the suits on the top floor.
It’s a Tragedy of the Commons situation: each market actor expects to get the benefits of automating away entry-level jobs, while expecting somebody else to keep training people through their junior career years so that mid- and senior-level professionals remain available in the job market.
Since most market actors share those expectations, and even those who don’t are pushed by competition to act the same way (paying for junior positions makes you less competitive than rivals who automate that work), the tragedy will eventually ensue once that “field” has been overgrazed for long enough.
Why would you want to train people to do it the wrong way? If you had to train someone tomorrow, would you show them the email client, or give them a cart and have them deliver memos for a week?
Right now we have handed some of the more basic tasks over to machines. Train the next generation to treat the automation of those tasks as a given.
It’s not the tasks themselves that matter; it’s the understanding of the basics, the implications of certain choices, and the real-life experience of things like “how long I thought it would take vs. how long it actually took” that comes from doing certain things from start to end.
Some stuff can’t be learned theoretically; it has to be learned as painful real-life lessons.
So far there’s an ill-defined boundary between what AI can successfully do and what it can’t, and sadly you can’t start teaching people past that boundary, because it’s not even a point: it’s an area where you already have to know enough to spot the AI-made stuff that won’t work. The understanding there and beyond is built on a foundational understanding of how to use the simpler building blocks and what their implications are.
And?
If it makes you feel better, the alternative can be much worse:
People get promoted to their level of incompetence. Sure, she’s a terrible manager, but she was the best at sales and is the most senior. Let’s have her check that everyone filled out their expense reports instead of selling.
You don’t get the knowledge sharing that comes from people moving around. A rival spent a decade of painful trial and error settling on a new approach, but you have no idea, so you’re going to reinvent that wheel.
People who do well on open-ended creative tasks never get the chance, because they failed to rise above repetitive procedural tasks. Getting started in the mailroom sounds romantic, but it’s maybe not the best place to learn tax law.
The technical, corporate, and general operational knowledge drifts further and further away from the rest of the industry. Eventually everyone is on ancient IT systems that sap (yes, pun intended) efficiency. Parts and software break and are hard to replace. Eventually the very systems that were meant to make things easier become burdens.
For us humans there really is no alternative to work, and thinking is the hardest work of all. You need to consistently reevaluate what the situation calls for, and any kind of rigid rule system of promotion and internal training won’t perform as well.
I think those in charge often don’t care. A lot of them don’t actually have any incentive tied to long-term performance. They just need short- or medium-term stock performance, and later they can sell. Heck, they’ll even get cash bonuses based solely on short-term performance. Many C-level execs aren’t in for the long haul. They’ll stay for maybe 5–10 years tops and then switch jobs, possibly when they see the writing on the wall.
Even the owners are often hoping to just survive until some bigger company buys their business.
And when the company does explode… they’ll just declare bankruptcy and later start a new company. The kind of people who found companies rarely do it just once. They do it over and over, somehow managing to convince investors every time.
Bullshit. Learn how to train new hires to do useful work instead of mundane bloat.
100%. If an AI can do the job just as well (or better), then there’s no reason we should be making a person do it.
Part of the problem with AI is that it takes significant skill to understand where it goes wrong.
As a basic example, get a language model like ChatGPT to edit writing. It can go very wrong: removing the wrong words, changing the tone, and making mistakes that an unlearned person doesn’t notice. I’ve had foreign students use AI to write letters or responses, and often the tone is all off. That’s one thing, but the student doesn’t realize they’ve written a weird letter. The same goes for grammar checking.
This sets up a dangerous scenario where, to diagnose the results, you need to already have a deep understanding. This is in contrast to non-AI language checkers that are simpler to understand.
Moreover, as you can imagine, the danger is that the people making decisions about hiring and restructuring may not understand this issue.
The good news is this means many of the jobs AI is “taking” will probably come back once people realize it isn’t actually as good as the hype implied.
And AI is not always the best solution. One of my tasks at my job is to respond to website reviews. There’s a button I can push that will generate an AI response. I’ve tested it. It works… but it’s not personal. My responses directly address things reviewers say, especially if they have issues. The AI’s responses are things like, “Thanks for your five-star review! We really appreciate it,” blah blah blah. A full paragraph of boilerplate bullshit that never feels like the review was actually addressed.
You would think responding to reviews properly would be a very basic function an AI could do as well as a human, but at present, no way.
This.
In accounting, 10 years ago, a huge part of the job was categorising bank transactions according to their description.
Now AI can kinda do it, but even providers with many billions of transactions to use as training data have a very high error rate.
It’s very difficult for a junior to look at the output and identify which ones are likely to be incorrect.
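For anyone curious what that setup tends to look like, here’s a minimal sketch in Python. Everything in it is made up for illustration (the categories, descriptions, and threshold are not from any real provider’s pipeline): a classifier predicts a category from the bank description and flags low-confidence guesses for human review. The catch is exactly what the comment above describes: confidence and correctness aren’t the same thing, so the model can be confidently wrong, and those are the errors a junior has no way to spot.

```python
# Minimal sketch of a transaction categorizer (all data is hypothetical).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set; real providers train on billions of transactions.
texts = ["SHELL FUEL 1234", "UBER TRIP HELP.UBER.COM", "AWS EMEA", "STARBUCKS #552"]
labels = ["Fuel", "Travel", "Software", "Meals"]

model = make_pipeline(
    # Character n-grams cope a bit better with messy bank descriptions.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

REVIEW_THRESHOLD = 0.6  # arbitrary; tuning this well is itself expert judgment

for desc in ["SHELL GARAGE 99", "AMZN MKTP US"]:
    probs = model.predict_proba([desc])[0]
    i = probs.argmax()
    # Low confidence gets routed to a human -- but a confidently wrong
    # prediction sails straight through as "auto-accept".
    flag = "REVIEW" if probs[i] < REVIEW_THRESHOLD else "auto-accept"
    print(f"{desc!r} -> {model.classes_[i]} ({probs[i]:.2f}) [{flag}]")
```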
They don’t want to train new hires to begin with. A lot of the work that new hires relied on to get a foothold in a job is bloat and chores that nobody wants to do, because new hires aren’t yet trusted to take on more responsibility than that.
Arguably whole industries exist around work that isn’t strictly necessary. Does anyone feel like telemarketing is work that is truly necessary for society? But it provides employment to a lot of people. There’s much that will need to change for us to dismiss these roles entirely, but people need to eat every day.
Indeed: at least in knowledge-based industries, everybody starts by working at a level of responsibility where the natural mistakes of someone still learning have limited impact.
One of my interns read the wrong voltage and it took me ten minutes to find his mistake. Ten minutes with me and multiple other senior engineers standing around.
I congratulated him, and damn it, I meant it. This was the best possible mistake for him to make. Everyone saw him do it, he gets to know he held everything up, and he has to just own it and move on.
The “not willing to train” thing is one of the biggest problems IMO. But also not a new one. It’s rampant in my field of software dev.
Most people coming out of university aren’t very qualified. Most have no understanding of how to actually program real-world software, because they’ve only ever done university classes where the environments are usually nice and easy (possibly already set up), projects are super tiny, they can actually read all the code in the project (you cannot do that in real projects – there’s far too much code), and problems are usually kept minimal, with no red herrings, unclear legacy code, etc.
Needless to say, most new grads just aren’t that good at programming in a real project. Everyone in the field knows this. As a result, many companies don’t hire new grads. Their advertised “entry level” position is actually more of a mid-level position, because they don’t want to deal with this painful training period (which takes a lot of their senior devs’ time!). But that makes the field painful to enter. Reddit constantly has threads from people lamenting that the field must be dying, and every time it’s some new grad or junior. IMO it’s because they face this extra barrier. By comparison, senior devs get daily emails from recruiters asking if they want a job.
It’s very unsustainable.
This is not going to turn out well for a lot of people. Soon human beings will be made obsolete in the name of AI.
The fucked up part isn’t that AI work is replacing human work, it’s that we’re at a place as a society where this is a problem.
More automation and fewer humans working should be a good thing, not something to fear.
But that would require some mechanism for redistributing wealth and taking care of those who choose not to work, and everyone knows that’s communism.
So much this. The way headlines like this frame the situation is so ass-backwards it makes my brain hurt. In any sane world, we’d be celebrating the automation of mundane tasks as freeing up time and resources to improve our health, happiness, and quality of life instead of wringing our hands about lost livelihoods.
The correct framing is that the money and profits generated by those mundane tasks are still realized, it’s just that they are no longer going to workers, but funneled straight to the top. People need to get mad as hell not at the tech, but at those who are leveraging that tech specifically to deny them opportunity rather than to improve their lives.
I need a beer. 😐
money and profits generated by those mundane tasks are still realized, it’s just that they are no longer going to workers, but funneled straight to the top
Workers should be paid royalties for their contributions. If “the top” is able to reap the rewards indefinitely, so should the folks who built the systems.
I think you misspelled “taxes,” but it’s possible your spelling will turn out to be more accurate.
Exactly. This has nothing to do with AI and everything to do with UBI.
But the rich and plebes alike will push AI as the boogeyman, a distraction from the real enemy.
There’s this bizarre right-wing idea that if everyone can afford basic necessities, they won’t do anything. To which I say, so what? If you want to live in shitty government housing and survive off of food assistance but not do anything all day, fine. Who cares? Plenty of other people want a higher standard of living than that and will have a job to do so. We just won’t have people starving in the street and dying of easily fixable health problems.
We also have to be careful about how people define this sort of thing, and about how the sheer scale of our current wealth inequality would shape how something like UBI gets implemented.
In the rich’s eyes, UBI already exists and it’s called “welfare”. It’s not enough that people on welfare can barely survive on the poverty-level pittance the government provides; both the rich and the slightly-better-off have to put these people down as “mooching off the system” and “stealing from the government”, pushing for ever more draconian laws that punish their situation further. It’s a caste of people portrayed as even lower scum than “the poors”, right down to segregating where they live into “Section 8” housing as a form of control.
UBI is not about re-creating welfare. It’s about providing a comfortable safety net while reducing the obscene wealth gap as technology drives unemployment ever higher. Without careful vigilance, the rich and powerful will use it as another wedge issue to create another class of people to hate (their favorite pastime), and will push to gut the program just as hard as they do with welfare.
It’s not even a new thing either.
It used to be that every single piece of fabric was handmade, every book handwritten.
Humans have been losing out on labor since they realized Og was faster at bashing rocks together than anyone else.
It’s just a question of whether we redistribute the workload: turning “full time” down to 6 days a week and eventually 5, or cutting working hours from 12+ down to 8. That inflates the number of jobs to match the available labor.
Every single time, the wealthy say we can’t. But eventually it happens, and the longer it takes, the less likely it is to be peaceful.
Where are you that 7 days a week of 12-hour days is full time? That’s literally just always working. Standard full time in the States is a 40-hour work week.
But eventually
There’s no “eventually”. People have been killed, murdered, and harassed while fighting to make it a reality. Someone has to fight to make it happen, and an “eventually” diminishes the value of the effort and risks put forth by labor activists all over the world throughout history. It didn’t happen magically; people worked really hard to make it so.
There are both dystopian (a tiny elite owns the automatons and gets all the gains from their work while a massive unemployed underclass barely survives) and utopian (the machines do almost everything for everybody) outcomes for automation, and we’re firmly on the path to dystopia.
This was exactly the problem that Charles Murray pointed out in The Bell Curve. We’re rapidly increasing the complexity of the available jobs (and in the world of computers, the successful people can output 1,000 to 1,000,000 times more than simple labor). It’s the same dynamic as the industrial revolution, but to a greater degree.
The problem is that we’re taking away the vast majority of the simple jobs. Even working at a fast food place isn’t simple.
That alienates a good chunk of the population from being able to perform useful work.
That book is shit and should not be cited in any serious discussion. Here’s a good video explaining why the book is full of racist shit: https://youtu.be/UBc7qBS1Ujo
Precisely: the hill to die on is socializing the profits, not demanding we keep the shitty, injurious, repetitive jobs that break a person’s back by 35.
You don’t protest streetlights to keep the lamplighters employed. The economy needs to change fundamentally to accommodate the fact that many citizens won’t have jobs yet still need income. It won’t change, but it needs to.
So we’ll keep blaming the wrong thing, technology that eases the labor burden on humanity, instead of destroying the wealthy class that demands it be the sole beneficiary of said technology and its implementation, in perpetuity, to the detriment of almost everyone outside the owner class. Because if we did that, we’d be filthy dirty marxist socialist commies that hate freedumb, amirite?!
In an ideal world, people would receive better and more fulfilling opportunities when their mundane tasks are automated away. But that’s way too optimistic, and the world is way too cynical. What actually happens is they get shitcanned while the capitalists hoard the profits.
We need a better system. One that, instead of relentlessly churning toward the impossibility of infinite growth and funneling wealth upwards, prioritizes personal financial stability and enforces economic equilibrium.