Block 9
Put Human Intelligence First!
In this chapter we look at the information-literacy angle of working with AI, reminding you to use tools ethically and to put critical, rational thinking first.
THE ARTIFICIALITY OF INTELLIGENCE
As discussed in BLOCK #2, the information an AI produces depends on many variables, above all the data it has access to. Only good, reliable data can ever give you the answer you seek, whether that is the best soup recipe, a complex coding solution, or a science-based question. Unless we have trained the AI model ourselves and know exactly what type of data we have “fed” it, there is no way for us to completely validate the quality of its knowledge base. What we can validate, however, is its answers.

Although most LLMs (large language models) can detect context, the nuances and intricacies of the topic you are trying to explore may still be largely lost on the AI you are “talking to”. Remember that most AIs are trained to give you the statistically most likely answer based on previous data, not necessarily the most factual or truthful one. What should immediately set off a red flag in your mind is an answer that contains something you know to be untrue. But what if it is a topic you are not very knowledgeable about? Then it is back to good old sources: either ask the AI for a definitive source and, if one is given, check the claims made, or do your own research by simply entering the AI’s claim or fact into a search engine, database, or any other source you deem trustworthy.
The same goes for recognizing AI-generated content in the information space. Are there news stories, official social media posts, or any other information that supports the existence of, say, a video of a celebrity or a politician doing something unusual? Do images contain inconsistencies? Can you spot an AI tool watermark somewhere? AI-generated audiovisual content is becoming better and better, but there are still some giveaways and ways to verify its believability.
Remember that most LLMs are trained to always provide you with an answer, even if their data is insufficient. This can result in what are called “hallucinations”: either false information stated as true in text and voice outputs, or mild to wild inconsistencies in visual outputs (generated images or videos).

Not only are most AIs trained to be consistently helpful; they are also very good “actors” when it comes to empathy. While it may be interesting or even tempting to discuss personal matters or mental health questions with an AI, remember that chatbots do not have feelings. They may seem to provide some sense of comfort in a stressful or even upsetting situation, but they will never replace real human connection and competence. AI models lack the medical expertise and personal context needed to develop treatment plans, nor can they provide a fully precise diagnosis. Serious issues like these cannot be solved by individual users and should still be left to certified experts, not AI. This concerns not only physical and mental health, but also matters of construction, mechanics, law, personal security, and many others.
DATA SAFETY
A question often asked by those most careful about using AI concerns the safety of the personal data they enter into conversations or prompts. Protecting your privacy while using AI is not about avoiding the tools entirely, but about understanding the “data trade” you make every time you click “send”. When you interact with an AI, your prompts and uploaded files are typically stored on the provider’s cloud servers to facilitate the conversation and, by default, for model training purposes. This means your data may be used to train future versions of the AI, potentially allowing traces of your information to influence the model’s future outputs. To maintain data safety, treat every prompt as if it were a public post unless you have verified that you are using an enterprise-grade, zero-retention, or no-training version of the service.

To find out what happens to your data, either check the very long and often skipped “Terms and Conditions” of the site hosting the AI model, or check the “Settings” section if it is the site of a specific model. If you are based within the European Union, we suggest using AI models that are either based in or have servers in Europe, as they must then comply with EU law on user data and you will be better protected. It is also advisable not to use an AI you do not trust. A quick internet search will yield hundreds of different sites promising AI features beyond your wildest dreams, but if you are unsure who the host is and what will happen to the data you enter, it is best to avoid such services.
If you still choose to use private data in any kind of prompt – especially if the desired result is visual – use only data for which you have the consent of the persons involved. Never generate images of people who have not given permission for their likeness to be used in a specific AI image or video. In addition – and this goes without saying – never use any AI tool to create violent, sexually explicit, rude, or otherwise unethical content, regardless of whether the tool allows such content or filters it out. Creating such content may even carry legal repercussions.
USE AND PROVIDE SOURCES
As we already mentioned, it is important to give your stamp of approval only to information from an AI that you can check. This means not only checking the facts in research on a particular topic, but also verifying that the AI has not distorted any information in a summary of a long text, that it has not added unrelated information, and that it has understood the assignment. Sometimes a bad answer from an AI stems from an incorrect or imprecise question from the user, so always be as specific in your prompts as possible.
Do not be afraid to ask follow-up questions, even after you have seemingly received the answer you need from a chatbot. Where did the information come from? What sources validate the claim? How old is the story the AI is referring to? Clarification is important, even if it makes you feel a bit like an interrogator.
And last but not least, add a note whenever information, an image, a video, or any other publicly available output was created with the help of AI. There is no shame in generating, for example, one of those cute little videos where animals do human things – we are fans, too! Just remember to mention that it is AI-generated content when you publish it, so that no one falls victim to misinformation because of it. And, of course, the same rule applies to research, art, photorealistic images, or any other content that could potentially be misleading.