
Hallucinating Bullshit Modulations

  • Writer: Christine Boone
  • 8 hours ago
  • 2 min read

I use AI occasionally. I tried to use it yesterday, to no avail. I was writing an exam for my students, and I wanted to test them on studio processing effects and modulation at the same time, so I went looking for a song that would address both questions. I was having trouble thinking of anything, so I went to a popular music examples database, and then, as a last resort, I asked ChatGPT.


My prompt:

"Find a pop song with noticeable autotune that also includes a modulation."


Its answer:

[Screenshot of ChatGPT's response, recommending "Believe."]

But like...no it doesn't? Auto-Tune, sure. It IS the iconic example. But there is no modulation in that song. I doubted my own memory because of this response (shame on me), and went back and listened just to be sure. I went ahead and said yes to the "Would you like me to suggest a few more modern examples?" question, and NONE OF THOSE SONGS MODULATED EITHER.


I'm both soothed and irritated by this. I'm soothed because it continues to mean that my students can't get very far if they decide to use AI for help with music theory homework. But I'm irritated because so many of the world's resources are going to this technology that (a) is being used for the wrong things and (b) doesn't even do them well! Hearing that a song modulates is one of the simplest things for us to do as humans, even if we don't have the academic terminology to describe it, and this robot is literally just making things up!


I'll give you a surprising quote from an academic journal:


"Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit."


-Michael Townsen Hicks, James Humphries, and Joe Slater

"ChatGPT is Bullshit" (Ethics and Information Technology)


If you didn't know the song "Believe," wouldn't you think that the answer I was given looked like a great one? I wanted an example to put on my exam, so ChatGPT gave me one, even at the cost of it not being real. The takeaway from the article (which I learned about through a piece a colleague sent me) is that it can be dangerous to apply words for human behaviors, like "hallucinations," to AI, because we risk falling into the mindset that (as I mentioned a couple of years ago) artificial intelligence is intelligent; that it has agency, can think, and worst of all, is responsible for the terrible things that can happen when it goes awry. This is a robot that compiles information, and it was programmed by humans. When it tells someone something terrible, that is those humans' fault.


I'll probably still use ChatGPT occasionally, though never for creative pursuits, always remembering its capabilities and limitations, and ALWAYS double-checking its output. BEWARE OF BULLSHIT!


Mashademia

© 2014-2025 by Christine Boone
