
Generative AI and History - Google Gemini

  • 27-02-2024 10:50am
    #1
    Registered Users, Registered Users 2 Posts: 2,824 ✭✭✭


    I came across this story by accident. It didn’t make a lot of headlines, and maybe wasn’t noticed by many who aren’t following AI development, or aspects of the “culture war”.

    Google launched its Gemini AI tool in December. One of its features was image generation.

    Some people noticed that it struggled to generate images of “white” people, and seemed to default towards creating images of “people of colour”, and towards gender balance. So if you asked it to generate images of the Pope, you got an African-looking man and an Indian woman. An African man could certainly become pope in future, but it hasn’t happened yet. And a woman pope is not possible.

    At first it seemed like a funny story, and some responses online dismissed it as over-sensitive “right wingers” who call everything “woke”.

    But it does raise a lot of questions about historical accuracy and historical memory.

    To address the “white fragility” accusations: if it were impossible to generate images of people of colour, especially in a historical context, this would be equally wrong.

    Imagine if you asked it to generate a picture of Nelson Mandela and the ANC leadership (Tambo, Sisulu etc.) and they were all white, and half of them women?

    Secondly, as people rely more and more on the internet, AI and googling for authoritative information, this does give insights into how countries, corporations or other actors could try to alter historical memory to influence the present and future.

    We can criticize China and the Soviet Union for wiping people and events from history (Trotsky, Tiananmen Square etc.), but should this blatant weirdness from Google Gemini give pause for thought about more subtle changes ahead?

    Even with all the different views and debates within it, is history as we know it at risk?

    Is it more important than ever to have sources like books which can be used to counter the shifting sands of political and ideological trends online?

    Perhaps blockchain could have a use for cataloging changing sources like Wikipedia.
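
    To make that concrete, here's a rough sketch of what I mean (purely illustrative - the names and structure are mine, not any real Wikipedia or blockchain API). Each revision is hashed and chained to the previous one, so quietly rewriting an old version breaks the chain and becomes detectable:

    ```python
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class RevisionRecord:
        """One entry in a tamper-evident log of article revisions (hypothetical)."""
        article: str
        text_hash: str   # SHA-256 of the revision text
        prev_hash: str   # hash of the previous record, chaining the log together

    def record_hash(rec: RevisionRecord) -> str:
        payload = f"{rec.article}|{rec.text_hash}|{rec.prev_hash}".encode()
        return hashlib.sha256(payload).hexdigest()

    def append_revision(log: list, article: str, text: str) -> None:
        prev = record_hash(log[-1]) if log else "genesis"
        text_hash = hashlib.sha256(text.encode()).hexdigest()
        log.append(RevisionRecord(article, text_hash, prev))

    def verify_log(log: list) -> bool:
        """Return False if any earlier record has been quietly altered."""
        prev = "genesis"
        for rec in log:
            if rec.prev_hash != prev:
                return False
            prev = record_hash(rec)
        return True

    log = []
    append_revision(log, "Tiananmen Square protests", "Original article text...")
    append_revision(log, "Tiananmen Square protests", "Revised article text...")
    print(verify_log(log))       # True
    log[0].text_hash = "0" * 64  # simulate someone quietly rewriting history
    print(verify_log(log))       # False - the chain no longer matches
    ```

    You wouldn't necessarily need a full blockchain for it - any append-only, publicly mirrored hash chain gives the same "has this source quietly changed?" guarantee.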

    https://www.geo.tv/amp/532631-from-black-vikings-to-no-yes-or-no-on-paedophilia-google-gemini-gets-wires-crossed



Comments

  • Registered Users, Registered Users 2 Posts: 2,495 ✭✭✭Markus Antonius


    I don't use Gemini, but OpenAI has been very good for having historical discussions. Being able to meander and pursue different historical aspects has opened up a whole new world for me. Not sure I'll have any use for the image generators any time soon, especially not if they make Hitler look like Martin Luther King.



  • Registered Users, Registered Users 2 Posts: 26,998 ✭✭✭✭Peregrinus


    I struggle to see what this has to do with history.

    Nobody imagines that an AI-generated image of (say) a Roman legionary contains or reflects any historical truth, do they? It's produced by an AI system trained on a database of images which themselves may be creative, imaginative, satirical, caricatures, etc, but very few or none of which would be regarded as primary historical sources.

    There is certainly an issue about AI systems absorbing racial (and other) prejudices embodied in the database of images and text on which they are trained, or being skewed by algorithms designed to correct for this but which in fact overcorrect (I suspect the latter is happening here). But nobody seeking historical information or insight is going to be paying any attention to AI-generated images, are they? Few people can be that stupid.



  • Registered Users, Registered Users 2 Posts: 2,824 ✭✭✭donaghs


    I would beg to differ. It’s early days, and not long since ChatGPT made its appearance. But I’ve seen lots of people who take it for granted that generative AI is giving them the correct information.

    Some people will always prefer images over text. Rather than tracking down a suitable image, and potentially having to pay licensing fees or pay an artist, it’s clear that generative AI images will have a role for some people in historical research and content creation.



  • Registered Users, Registered Users 2 Posts: 26,998 ✭✭✭✭Peregrinus


    OK, those people are idiots. And the problem is not going to be solved by improving generative AI; you have to do something about their idiocy.



  • Moderators, Category Moderators, Arts Moderators, Sports Moderators Posts: 50,890 CMod ✭✭✭✭magicbastarder


    it's the 'garbage in, garbage out' issue; if there's bias in the source material any AI system uses, it will just recreate that bias.
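
    A toy illustration of that point (nothing like a real training pipeline, just to show the mechanism): if 90% of the source captions pair "doctor" with "man", a generator that simply mirrors the frequencies it was fed reproduces the same 90/10 split in everything it produces.

    ```python
    import random
    from collections import Counter

    # Skewed "training data": 90% of the doctor captions say "man".
    training_captions = ["doctor, man"] * 90 + ["doctor, woman"] * 10

    # A "model" that does nothing but mirror the frequencies it was fed.
    generated = [random.choice(training_captions) for _ in range(1000)]

    print(Counter(generated))  # roughly 900 x "doctor, man" vs 100 x "doctor, woman"
    ```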



  • Registered Users, Registered Users 2 Posts: 26,998 ✭✭✭✭Peregrinus


    Generative AI systems have to be trained on a vast amount of material — the more material, the more they learn. And there's a tension between being concerned about the quality, accuracy, comprehensiveness etc of the material you feed them and the desire to feed them as much as possible. Hence they readily learn bias, misrepresentation, etc — particularly those biases, misrepresentations and distortions which flourish on the internet. Which, as we all know from common experience, is a lot.



  • Registered Users, Registered Users 2 Posts: 2,824 ✭✭✭donaghs


    That’s true. But if you look at the way the Gemini output seems to have been skewed, I doubt that this was based just on the materials/input it was fed. Rather, there would appear to be bias filters applied to what was produced. I guess this is moving away from a history discussion.

    From the history perspective, there are many, many not-so-bright people out there who know little of history, or even of contemporary current affairs. Google search has become like a “newspaper of record” for getting the best results. So the historical inaccuracies in Gemini are a worrying precedent for how these technologies can pollute the shared sense of historical accuracy and historical memory.



  • Registered Users, Registered Users 2 Posts: 26,998 ✭✭✭✭Peregrinus


    I think what's happening here is over-correction. Left to themselves, AIs tend to produce images which appear to us offensively stereotyped (doctors are always male; nurses are always female) or bigoted (criminals are black males; victims are white females). That's problematic, and users object to it, so software companies who want their AIs to be used (how else are they to make money from them?) introduce an algorithm to try to correct for this bias. But it's a poorly designed algorithm, and it over-corrects, or it operates in contexts where there is in fact no bias (popes are always male; Nazi officers really weren't black Africans).
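
    As a rough sketch of how that kind of correction layer can misfire (my own toy illustration, not how Gemini actually works): imagine a filter that blindly prepends a randomly chosen demographic qualifier to every image prompt before it reaches the image model. It nudges a generic prompt away from a stereotyped default, but applied to a historically specific prompt the same rewrite simply invents a false past.

    ```python
    import random

    # Hypothetical, deliberately naive "diversity filter" sitting between the
    # user's prompt and the image model.
    DIVERSITY_TERMS = ["a Black", "a South Asian", "an East Asian", "a white"]

    def rewrite_prompt(prompt: str) -> str:
        """Blindly inject a demographic qualifier, with no awareness of context."""
        return f"{random.choice(DIVERSITY_TERMS)} {prompt}"

    # Plausible for the first prompt; historically nonsensical for the others.
    for prompt in ["doctor talking to a nurse", "pope in the year 1500", "Viking chieftain"]:
        print(f"{prompt!r} -> {rewrite_prompt(prompt)!r}")
    ```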

    As for people being misled in their historical research, even if you solve the problem of AIs producing implausible images on historical themes, you haven't solved the real problem, which is people imagining that AI is somehow a reliable historical source. We notice racist or sexist distortions and counter-distortions because we are sensitised to these things, but even if we correct those, the products of AI will be distorted in many, many other ways that we are not sensitised to notice, and AI will continue to be woefully historically unreliable. There is no short cut to teaching people the basics of historical research, the nature of historical sources, etc.

    (Hypothetically you could, I suppose, develop a history-focussed AI that was trained not on vast wodges of material taken from all over the internet, but exclusively on primary source material. But even then you'd have to teach the AI how to engage in critical thinking, evaluate sources, etc, and I don't think we're anywhere near that yet.)



  • Registered Users, Registered Users 2 Posts: 16,815 ✭✭✭✭whisky_galore


    Not quite AI, but adjacent. I've noticed a flood of "colourised" historical photographs on social media; they get widely shared, and not a few books of colourised photos get published.

    The software tends to make assumptions: things that are known to have been green, for example, are quite often rendered as red.


