
Will AI take your job?


Comments

  • Registered Users, Registered Users 2 Posts: 9,932 ✭✭✭take everything


    It's incredible people are still in denial about what is coming.




  • Registered Users, Registered Users 2 Posts: 1,589 ✭✭✭Emblematic


    You mention hardware needing to be replaced and upgraded. But if this is so due to obsolescence, then isn't this a prediction of the importance of AI rather than an indication of it being an over-hyped flash-in-the-pan? After all, if this AI thing was just a fad, then we would expect to see a glut of compute power in the server racks.



  • Registered Users, Registered Users 2 Posts: 3,898 ✭✭✭snotboogie


    No wonder, if you are still using Copilot lol. Switch to Codex or Claude Code.




  • Registered Users, Registered Users 2 Posts: 1,037 ✭✭✭BP_RS3813


    Copilot is the one the company wants us to use and nothing else is allowed (I wouldn't use the others anyway, but Copilot is the sh*ttest of them all).



  • Registered Users, Registered Users 2 Posts: 924 ✭✭✭bored65


    They will change their tune when they get outcompeted by competitors using other tools; Cursor and Amazon's Kiro, for example, are getting ridiculously good.

    Like I mentioned earlier, the danger to people is not AI replacing them, but other companies and people using better AI tools and models to outcompete them.

    You might want to raise this with management.



  • Registered Users, Registered Users 2 Posts: 15,965 ✭✭✭✭briany


    If robots and computers do all the work, then all those unemployed people are no longer really in the system. The system no longer has any duty of care toward them.

    Look at companies right now who use AI to replace workers: do they appear to feel any particular remorse about it? Doesn't seem so. Their shareholders are probably thrilled with the extra profit. So why would they, at some point in the future, turn around and generously hand over a big chunk of those profits to fund UBI? Not happening, based on any current evidence that I can see. At best, UBI would come with some pretty onerous stipulations. People aren't going to be left at a loose end to swan around and just do whatever they fancy. UBI would be very much geared towards increased control in some way, rather than a post-work utopia.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    I use Copilot to do pretty useful stuff. I think people don't spend enough time with it and give up too easily. Other tools are better, but that's no use to me if they aren't available in the job.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    This has been a problem for many years before AI. Companies cut the investment in training and mentoring, then complain they can't get people with the right skills.

    We've got mediocre middle management with few skills but who talk the talk. Yet they create a lot of AI slop which they don't understand.



  • Advertisement
  • Registered Users, Registered Users 2 Posts: 377 ✭✭backwards_man


    That has not been my experience. I have worked almost my full career in US multinationals and the training budgets are generous. Most people don't use them. People want to be spoon-fed. Training and upskilling take hard work and effort.



  • Registered Users, Registered Users 2, Paid Member Posts: 14,214 ✭✭✭✭Cluedo Monopoly


    Copilot is powered by OpenAI. It has ChatGPT LLMs under the bonnet.

    What are they doing in the Hyacinth house?



  • Registered Users, Registered Users 2 Posts: 331 ✭✭babyducklings1


    Ok, maybe this is a stupid question, but is AI all going to be monetised? So, for example, at the moment you can go into AI and get it to do tasks for free, like ChatGPT etc. But eventually will it be a paid subscription for everything?

    I think it will be, but am very much open to correction. When the internet started, stuff was free; then later everything became a subscription.

    So it might be that if you have money you pay for the AI, and if your job is gone you can't pay for it anyway. I'd say hold onto books, because AI is taking people's brains. Yes, it's good, but it does make mistakes as well.

    The other thing is I can't see governments allowing this to be a free-for-all. There would be no thinking, no discovery, no intellect, no light-bulb moments. Maybe the end of all thought!



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    It is monetised. It's quite expensive for companies to buy. I expect in companies some people will have it and some people will not. It's too expensive to give to everyone if they aren't using it productively. Which means they will measure who uses it and who doesn't.




  • Registered Users, Registered Users 2 Posts: 4,599 ✭✭✭joseywhales


    I did a little experiment at work a few weeks ago. A few guys in my team have been given charge of Claude adoption for the last 3 or so months. We got a question from a business user about what seemed to be an erroneous order (trading); it looked to them like it was missing some data. Basically these orders present as a graph structure with multiple nodes in a hierarchy. I am probably an expert, given that the questions in the org always funnel toward me when they get difficult enough. So I told the guys to use Claude, spend a day or two, see if it could beat me. I was a bit nervous and also curious, but in fairness I had the deck stacked in my favour, given I have the context in my brain. Anyway, Claude failed miserably, but even worse, it failed in a very ungraceful way. The guy had some context, so he did give it information on the structure, and it had direct access to all the databases that applied. It completely hallucinated a solution. At first I got a shock: it confidently went down a path I had never considered before, using different identifiers, a different pattern. I was worried and excited that I would learn something. But it was complete nonsense. The main weakness was this:

    The question from the quant was asserted with authority (these clowns often do this and are often wrong; it's a strategy they use to expedite an investigation by being alarmist), but he made a few incorrect assumptions implicit in his question. The questioner did not understand the data himself. Claude treats the questioner as some kind of all-knowing god, so it will jump through hoops to satisfy a questioner who has bad context. These LLMs are highly biased to satisfy the limited user that engages them; they do not push back and challenge us enough, which may make us feel satisfied with small test cases, reinforcing our own assumptions about problems, but potentially creating large issues in the future. Even if AI can write code better than devs, it has no value unless there are devs that understand it, and we lose the value of the feedback and corrections to our understanding that we get by struggling with logic during implementation. Code is not a commodity; it is also a tool we use to teach ourselves and challenge our logic.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    Rubbish in, rubbish out.

    If you were writing the prompts it would be more successful. It makes no sense to have someone who is making incorrect assumptions directing the AI.

    The AI needs direction and control.



  • Registered Users, Registered Users 2 Posts: 4,599 ✭✭✭joseywhales


    Then I'd question the value of AI. Generating code is about 5% of the effort of solving the problem.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    Seems to me your complaint is that it strayed beyond restrictions it was never given.



  • Advertisement
  • Registered Users, Registered Users 2 Posts: 924 ✭✭✭bored65


    And that's why I don't buy the whole "AI will kill software engineering" meme, or last week's "SaaSpocalypse" where various software companies lost massive amounts of their stock price.

    Writing code and tests is usually a small fraction of an engineer's day, and the companies that hire engineers still have backlogs and feature wish lists far longer than even a 25% productivity boost from these tools could clear anytime soon. And of course the amount of work always grows.

    As mentioned in the Stocks 2026 thread, I was busy cherry-picking great companies last week that have wide moats and platforms that will only grow larger, as they have access to the same AI tools and their engineers will use them.

    Just look at this site. Despite AI tools it remains buggy and hard to use, and continues to be outcompeted and outgrown by Reddit engineers using AI tools.



  • Registered Users, Registered Users 2 Posts: 4,599 ✭✭✭joseywhales


    Well, it did provide a solution that was actually incorrect, even given the admittedly weak constraints it was handed. It actually produced code that incorrectly classified orders, instead of just saying some form of "cannot solve, not enough information provided".



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    Why not iterate until it was accurate? How often is code perfect at v1.0?



  • Registered Users, Registered Users 2 Posts: 4,599 ✭✭✭joseywhales


    Yeah, true, but the dude spent a day or two at it; it was definitely version 8 or 9. I did my own thing in parallel. I would not feel comfortable without being very specific and working through the problem to understand it, and the only way I can do that is by literally stepping through the data and logic myself: so, in essence, writing code. Every time it produces code, I'd have to spend hours breaking it down, thinking of edge cases, asking for iteration, parsing its output again.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    The reason this site is buggy is down more to bloody-minded stubbornness than anything to do with tools. The scope was too large for the resources, and those resources were further under-resourced. Rather than cutting their cloth to fit the resources, they just kept digging a deeper hole. It has never recovered.

    AI isn't an automatic pilot. It needs to be guided. And it needs someone who understands how to use it to ask it the right things.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    "....Setting parameters for agentic AI involves configuring autonomy levels, tool access, and operational constraints to define how an agent perceives, reasons, and acts. Key settings include setting guardrails (e.g., human-in-the-loop approvals, max iterations), defining data sources, and managing model temperature for creativity versus precision.."
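    The settings in that quote can be sketched as a tiny guardrail check. This is a minimal illustration only, assuming a hypothetical `AgentConfig` with made-up field names; it is not any real agent framework's API.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Illustrative guardrail settings; all field names are invented."""
    max_iterations: int = 5                 # guardrail: hard stop on agent loops
    temperature: float = 0.2                # low = precision, high = creativity
    require_human_approval: bool = True     # human-in-the-loop gate
    allowed_tools: tuple = ("read_db", "run_query")  # explicit tool access

def approve_action(cfg: AgentConfig, tool: str, iteration: int) -> bool:
    """Allow a proposed agent action only if it passes every guardrail."""
    if iteration >= cfg.max_iterations:     # max-iterations guardrail
        return False
    if tool not in cfg.allowed_tools:       # tool-access guardrail
        return False
    return True

cfg = AgentConfig()
print(approve_action(cfg, "read_db", 0))    # True: tool allowed, under limit
print(approve_action(cfg, "delete_db", 0))  # False: tool not whitelisted
```

    The point of the sketch is that "setting parameters" mostly means deciding, up front, what the agent is not allowed to do on its own.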

    A lot of developers think they can wing it with AI. We've had that with a few senior devs: they just leapt in cold. The AI team refused to stand over what the devs had done with AI, because they didn't know what they were doing. How many do any AI training before lashing into it?

    Same with business managers expecting AI to extract meaningful intelligence from libraries of documents with undefined jargon, undated, unclassified and zero metadata.

    Garbage in garbage out.

    I did the same myself at the start. I've seen massive changes in Copilot over the last 6 months; it's made huge leaps.



  • Registered Users, Registered Users 2 Posts: 106 ✭✭CatLick


    AI will iterate at an ever-faster pace, and AI companies will put frameworks and specifications in place to drive that. Time will tell, but compare any car from today with a car from 100 years ago. That's what will happen with AI/software, just much quicker.



  • Registered Users, Registered Users 2 Posts: 16,972 ✭✭✭✭Flinty997


    Cars (ICE at least) are not the best example; they still fundamentally work the same way. Something like computers might be a better comparison, and they've existed for half as long.

    The only problem is that soon no one will be able to afford one, or any computer. We will rent time on a computer instead.



  • Registered Users, Registered Users 2, Paid Member Posts: 8,825 ✭✭✭plodder


    I've changed my view on AI since I started the thread, having spent some time over Christmas learning a bit about it. I believe it is basically sentient, for the simple reason that it understands ideas and concepts in much the same way that humans do. Or, if it isn't sentient, then AI is going to change our understanding of what sentience actually is(*). The huge innovation that happened in 2018 was in how a model's understanding of a word is efficiently altered based on the context provided by the words preceding it in any passage of text. A good example I saw is a word whose meaning is altered by the words before it:

    Tower - a large tall structure
    Eiffel Tower - a specific tower located in France
    Plastic Eiffel Tower - something totally different, not like a tower very much at all, a souvenir you might use as a key ring.

    AI is able to comprehend these distinctions and actually understands what a "plastic eiffel tower" is.

    This is the way it predicts the next word in a sequence - by absorbing all of the context in all the preceding words (at least a large number of them) in a text. And that's not even taking account of the innovations in the last 6 years which I have no idea about.
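    That context effect can be sketched with a toy attention step. The numbers below are invented 2-D "embeddings", not taken from any real model; the point is only that mixing a word's vector with its context, weighted by similarity (the core of self-attention), gives "tower" a different representation depending on what precedes it.

```python
import numpy as np

# Toy 2-D "embeddings": invented numbers, not from any real model.
emb = {
    "tower":   np.array([1.0, 0.0]),
    "eiffel":  np.array([0.9, 0.3]),
    "plastic": np.array([-0.5, 1.0]),
}

def contextualise(words, target):
    """One simplified attention step: mix the target word's vector with
    its context, weighted by dot-product similarity (softmax-normalised)."""
    q = emb[target]
    keys = np.stack([emb[w] for w in words])
    scores = keys @ q                                # similarity to each word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over context
    return weights @ keys                            # context-weighted mixture

v1 = contextualise(["eiffel", "tower"], "tower")
v2 = contextualise(["plastic", "eiffel", "tower"], "tower")
print(np.allclose(v1, v2))  # False: "plastic" shifts what "tower" means here
```

    Real transformers do this with learned projections across many layers and heads, but the principle is the same: the vector standing for "tower" is rebuilt from its surroundings before the next word is predicted.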

    Its limitation is not in its ability to understand but in the amount (and quality) of its training imo.

    Which doesn't make it any less terrifying. But it's pointless trying to stop it, any more than the Luddites who vandalised the mechanical looms, or the newspaper printers who tried to stop digital printing presses back in the 1980s. Mechanisation made many incredibly skilled manual crafts redundant, and AI will do the same for many knowledge-based skills. The idea of making clothes by hand, or printing documents or books by manually placing metal type on a frame, would be absurd to us today (outside of certain nostalgic niches). So it's hard to see the same not happening with AI.

    The one thing that will slow it, imo, is domains where expertise is not readily accessible for training. An example given earlier was law. AI will suck up public databases of legal judgments (and probably already has) and may make a good fist of predicting future ones, but the advice a lawyer gives on how to resolve a legal problem, or whether to take a case, is not typically in the public domain. So AI is not going to be a great help with that any time soon. And of course, craft skills that have survived mechanisation so far will be in demand more than ever.

    (*) Hugh Linehan has a good piece that addresses some of this in today's IT

    https://www.irishtimes.com/media/2026/02/23/the-inconvenient-truth-about-artificial-intelligence/

    “The opposite of 'good' is 'good intentions'”



  • Advertisement
  • Moderators, Category Moderators, Science, Health & Environment Moderators Posts: 9,573 CMod ✭✭✭✭Fathom


    Knee jerk reaction: Is AI a threat to my current job? Not today. Perhaps tomorrow. If so, I will adapt.

    Will Gen AI be a useful tool to explore big data empirical generalizations of today’s and tomorrow’s unknowns? Will we be able to differentiate between what’s valid or what’s slop?

    Disclaimer: Admittedly, I need to get up to speed on this rapidly evolving technology before I can answer the above questions with some understanding.

    Cmod Science, Health, and Environment


