
AI

A guide to generative AI.

Overview

In this section, we go over some of the main concerns about AI, particularly environmental, social, and reliability concerns.

The following video from Michigan Virtual outlines many of the main concerns about generative AI, which are explored in more detail below. 

Environmental Impact

AI is housed on servers in data centers around the world, and powering them requires enormous amounts of electricity. According to MIT, AI alone consumes as much electricity as nearly a quarter of households in the United States. Similarly, Fortune reported that plans for one AI data center could use as much energy in a day as New York and San Diego. These numbers are expected to rise dramatically. This growing demand has led to a larger carbon footprint: the electricity supplied to data centers is nearly 48% more carbon-intensive than the national average.

To keep data centers cool, developers build them along rivers and other bodies of water. The BBC reported that data centers pollute the water for those living nearby and cause other water-related issues. There is also evidence that AI consumes a significant amount of water, although the data is unclear because little tracking or record keeping exists. The University of Illinois Urbana-Champaign suggests that data centers can evaporate up to 3.4 gallons of water per kilowatt-hour of electricity used.
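To put that figure in perspective, here is a rough back-of-the-envelope calculation in Python. The 3.4 gallons-per-kilowatt-hour rate is the upper bound cited above; the 100-megawatt facility size is a hypothetical assumption chosen purely for illustration.

GALLONS_PER_KWH = 3.4        # upper-bound evaporation rate cited above
facility_power_mw = 100      # assumed facility draw (hypothetical)
kwh_per_hour = facility_power_mw * 1000            # 100 MW running for one hour
gallons_per_hour = kwh_per_hour * GALLONS_PER_KWH
print(f"{gallons_per_hour:,.0f} gallons evaporated per hour")      # 340,000
print(f"{gallons_per_hour * 24:,.0f} gallons evaporated per day")  # 8,160,000

At that upper-bound rate, a single large facility could evaporate millions of gallons per day, which is why its placement near communities and waterways matters.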

More often than not, AI data centers are built in areas with lower socio-economic populations. These areas often already face significant pollution, which the data centers add to.

Privacy

AI uses publicly facing material (like social media posts) and conversations as training material, which has raised concerns about privacy in the AI era. According to Stanford, AI can take photographs and other information shared on the internet and use them to create new material. All of this is done without our permission.

The healthcare industry has started to use AI in a variety of ways, such as diagnosis and insurance processing. Because healthcare information is protected by HIPAA, these programs are scrambling to figure out how to protect patient data in a way that complies with the law.

The question of privacy is particularly important in academia. When using AI, we have to consider FERPA, which prohibits sharing confidential student information with unauthorized persons or systems. Those using AI for administrative purposes or grading need to ensure that no identifiable student information is included. If interested, please watch this workshop on AI and FERPA for more information.
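As a concrete illustration of what removing identifiable student material can look like, the short Python sketch below scrubs two obvious identifiers, email addresses and nine-digit ID numbers, from text before it is pasted into an AI tool. Both patterns are assumptions made for this example; simple redaction like this is a starting point, not FERPA compliance on its own.

import re

def redact(text: str) -> str:
    # Replace email addresses (pattern is an illustrative assumption).
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Replace nine-digit numbers that could be student IDs (assumed format).
    text = re.sub(r"\b\d{9}\b", "[STUDENT ID]", text)
    return text

comment = "Contact jdoe@example.edu about student 123456789."
print(redact(comment))  # Contact [EMAIL] about student [STUDENT ID].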

AI also poses privacy concerns for research. Depending on the terms and conditions of the AI system you are using, the data you upload could be used for training purposes and become freely available to others using the tool. Before uploading any research material, check the terms and conditions for how it will be used. Feel free to check out this video on AI and research ethics.

Stereotyping

ChatGPT and other AI tools like Bard and Claude are trained on large datasets from the internet that may contain inaccuracies, biases, or outdated information. The large language models that these generative AI tools are built on reproduce those errors and biases in their responses. AI models have no real understanding of the questions they are asked; their replies are based on patterns learned during training. AI tools are also limited to the datasets they were trained on, which may be out of date, and many have no real-time access to information.

AI tools do not fact-check their responses. It remains your responsibility to critically evaluate the information generated by ChatGPT and to cross-reference it with other, reliable sources.

Employment

There is a growing fear that AI will replace humans. Various employers, including Google and X, have already been laying off employees. Goldman Sachs suggested that AI could replace workers in computer science, legal services, and customer service. There are also concerns that AI will replace teachers, though many suggest this fear is unfounded.

Copyright

Generative AI draws from large databases. To create these databases, developers upload large amounts of material, including books, songs, and more. Such material is often under copyright, and critics argue that by using it, and often reproducing it without citation, generative AI violates copyright law. Artists and authors have protested the use of their work in AI training models, going so far as to call for laws to protect their copyright. Recently, a group of authors sued Anthropic for copyright infringement; the case ended with Anthropic agreeing to a settlement, and the authors are set to receive payment.

Hallucinations

When asked to produce a list of articles on a certain topic, tools like ChatGPT often generate citations that look accurate and may even list actual experts in the field as authors, but when you search for these articles, you will find that most are not real. ChatGPT and other AI tools assemble the citations by "guessing" which words are most likely to appear together.
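The toy Python sketch below uses a simple word-pair ("bigram") counter, nothing like a real language model in scale, to show how that kind of guessing can stitch together a citation that looks real but matches no actual source. The four "training" citations are invented for illustration.

from collections import Counter, defaultdict

# Invented "training" citations (none of these are real articles).
corpus = [
    "Smith J. (2019). Machine learning in medicine. Journal of Medical AI.",
    "Smith J. (2019). Machine learning for diagnosis. Journal of Medical AI.",
    "Jones A. (2021). Machine learning in education. Computers and Education.",
    "Lee K. (2021). Deep learning in education. Computers and Education.",
]

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

# Greedily chain the most likely next word, starting from "Smith".
word, output = "Smith", ["Smith"]
for _ in range(12):
    if word not in follows:
        break
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))
# Prints: Smith J. (2019). Machine learning in education. Computers and Education.

The printed line reads like a real reference, but it matches none of the four training citations: it attaches Smith's name and year to the title and journal of a different entry. Real models work on a vastly larger scale, but the same pattern-chaining is how plausible-looking phantom references arise.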

Additionally, even when AI tools cite a real article or book, the existence of that source does not mean that the information in the tool's answer actually comes from it, or that the source has been summarized correctly.