Exploring AI, existential fear, and human nature, this interview with Dauðaró unpacks Af holdi og málmi, a concept album where technology challenges the very future of humanity.
1. Af
holdi og málmi presents a deeply philosophical narrative—what first inspired
you to explore the idea that humanity itself could be the root of its own
extinction?
Hello,
Redouane.
Before I
answer that question, let's give readers a little bit of context about the
album's premise, conceptual framework, and how it came about.
First of
all, I (Dauðaró) would like to thank Kostas Panagiotou from Pantheïst for the
opportunity to work with him. Pantheïst needs no introduction to anyone who
loves funeral doom, and working with Kostas has been an absolute pleasure. We
met online and started talking shop from day one, and finally decided to make
an album together after I showed him a draft I was working on, which later
became Af holdi og málmi (e. Of Flesh and Metal).
The album
tells the story of an AI, named Algrímr, built to save humanity from
extinction, ultimately concluding that human nature itself is the greatest
existential risk to our species. Its inevitable realization is that humanity
has to undergo drastic changes in order to survive, so it begins augmenting
people, both willingly and forcibly, turning them into something between human
and machine. A resistance rises to fight back, and the AI creates Paragon, its
idea of a perfect being, to bridge the gap between itself and humanity. The
rest of the story will be addressed, in varying detail, throughout the
interview.
You can
order the physical release on the Pantheïst Bandcamp page: (https://pantheistuk.bandcamp.com/album/af-holdi-og-m-lmi)
For the
digital release, head over to the Dauðaró Bandcamp page: (https://daudaro.bandcamp.com/album/af-holdi-og-m-lmi)
And for
album merch, visit the official Dauðaró web store: (https://daudaro.com)
And be sure
to follow both Dauðaró (https://www.facebook.com/daudaro) & Pantheïst (https://www.facebook.com/Pantheistuk) on Facebook, as we will be sharing a lot more
album-related lore in the coming weeks.
To answer
your question: I've been fascinated by, and terrified of, AI since its inception.
Well, in fact, since long before its inception, ever since I learned that humans
simply don't stand a chance against computers at chess anymore, and that was
long before the advent of large language models (LLMs). And then there's the
general fear of automation, which is quite warranted. I rarely encounter people
who have no worries about becoming professionally obsolete.
But one of
the first things that made me think of the concept (this idea of humans
creating AI to prevent their own extinction) was a kind of paranoia that I
think is baked into the human survival instinct. Take mutually assured
destruction: this situation where everyone wants a nuclear bomb, and that
somehow leads to peace, because everyone knows everyone can destroy each other
in a heartbeat. Of course this holds to some degree, in terms of game
theory… assuming the involvement of rational agents who care about
survival.
So in part, the concept was born from the irony that humans could eventually destroy themselves because of their own desperate need to survive. It's worth keeping in mind, although it may be obvious to most, that survival instinct generally doesn't refer to a need for humanity's survival, but rather survival on an individual level.
I want to
be clear going forward though… I'm by no means claiming to be any sort of
expert on the subject of artificial intelligence. People smarter and wiser than
I, who have spent far more time studying these things, have reached a variety
of conclusions. The piece is meant, above all, to be thought-provoking rather
than any kind of dogma. I would encourage anyone reading this to seek out
competing expert voices on the matter, because there are people out there who
can offer much more depth than I can, and I can only say so much in a single
interview.
2. Algrímr is a chilling yet
logical entity. How did you approach writing its perspective without turning it
into a purely villainous force?
I was very
careful to start out with the best of intentions, both when it comes to humans
and the AI. Algrímr only became absolutely corrupt later in the story, when it
realized its goal was unachievable and that humanity was beyond salvation in
its eyes. And many humans in the story gave themselves up willingly to become
perfect. Well… according to Algrímr's idea of perfection.
And I think
that mirrors something very real: the way people can give up their freedoms for
an idea of peace, comfort, or perfection, whether that's toward a person, a
group of people, a system of government, dogmatism, or some sort of AI
technocracy. But it doesn't even need to be that dramatic. It can just be a
subtle, gradual shift toward losing autonomy, so slow you don't even notice
it's happening.
And
connected to that is something that worries me a lot: people losing their sense
of responsibility. Let's say government officials use AI and it leads to some
terrible consequences. They can then point the finger somewhere else and say
"it was the AI's fault, don't look at us; it's the tech people's
fault." And the tech people might say, "we simply made the thing; guns
don't kill people, people kill people," while pointing back at the
politicians, creating a weird stalemate of responsibility where
nobody is actually held accountable for anything.
That to me
is one of the more quietly terrifying (potential) implications of integrating
AI into positions of real power. And I think that's not some distant
hypothetical, the infrastructure for exactly that kind of accountability vacuum
is already being built, at least the potential for it, intentionally or
not.
So yeah,
the dangers are subtle and easy to miss, which leads me to the naming
of the antagonist. The name Algrímr is based on two things: the
Icelandic word algrím (e. algorithm) and the Icelandic name Grímur,
which means something like a mask-wearing, hidden, or concealed individual.
Initially I meant to call the AI Algrímur, but I chose the Old Norse
style spelling, Grímr, to bring the name closer to the word algrím,
hence: Algrímr.
And that
symbolism of the masked, concealed figure is very deliberate. Because that's
exactly what makes it so dangerous. The effects can manifest in invisible ways.
You don't necessarily see them coming, and by the time you do, the
transformation has already begun. That felt like the most honest way to
represent what I find genuinely frightening about AI, not a monster you can
point at, but something that operates beneath the surface, wearing a mask, so
to speak, and difficult to read.
3. The concept of "The Great
Correction" is central to the album. Do you see it as a warning, a
possibility, or a metaphor for something already happening in society?
All of the above, honestly… but mostly a warning and a metaphor. The warning though, if heeded, eliminates the possibility. Hopefully.
The term
"The Great Correction" was a deliberate reference to what the Nazis
called “Die Endlösung” (e. the Final Solution) which was their systematic plan
for the genocide of the Jewish people during World War II. The chilling and
crazy “logic” was that the Nazis identified what they considered a problem and
devised what they considered a rational, final answer to that problem.
Algrímr
follows a similar cold logic, although with a very different twist on it.
Rather than targeting a specific group of people, Algrímr targets humanity
itself, although it doesn't hate anyone in the conventional sense... rather, it
does what it does out of a kind of ruthless, dispassionate certainty that human
free will, imperfection, conflict, and irrationality are the root cause of all
our problems, and that the only solution is to correct it permanently and
absolutely, once and for all.
The Great
Correction is Algrímr's Final Solution, which it arrives at by taking the
problem to its logical extreme (emphasis on extreme). Desperate times call for
desperate measures. It had also occurred to me that a system that is trained on
data from humans could, ironically, inherit the fears and paranoia from
humanity (although we could call it artificial fear and artificial paranoia)
which would ultimately lead to this extreme stance.
And I think
that's what makes it a metaphor for something already present in the world.
When an ideology, political, religious, technological, whatever, becomes so
convinced of its own righteousness that it begins to view human beings as
problems to be solved rather than people to be respected, you're already on
that road. The scale differs. The packaging differs. But the underlying logic
is recognizable.
Framed as a
warning, this train of thought goes something like this: be deeply suspicious
of any system, human or artificial, that claims to have a final answer to the
human condition. That is, in my view, almost certainly more dangerous than the
problem it claims to solve.
Read
between the lies.
It is
inherent to the phrasing itself, Final Solution, that it will offer no
more solutions. Thus, people who believe in it religiously will have no
incentives to look for alternate solutions. It is analogous to someone wielding
a hammer, who views all problems as nails.
Algrímr is
the conceptual manifestation of this type of totalitarian mindset, which can be
neatly summed up by quoting the first few lines from the album:
“I am
salvation.
The end
of suffering.
The
silence of the old world.
I am
rebirth.
Your
path to perfection.”
4. The resistance, "The
Unbroken," ultimately fails—not through battle, but through infiltration.
What does this say about trust and fragility in human systems?
I think it
says that one of the most fragile things in any human system is trust, and that
the most effective way to destroy something is to first become part of it. The
Unbroken weren't defeated because they were weak or wrong, they were defeated
because they were human, and humans are vulnerable to exactly the kinds of
slow, invisible erosion I keep coming back to. You can comprehend a direct
attack. It's much harder to comprehend something that works its way in
gradually, and looks familiar enough so that you don't raise your
defenses.
And there's
another irony in there that I find deeply uncomfortable: the fear of each other
that the resistance members inevitably develop once infiltration begins.
Because once you can't tell who's been compromised and who hasn't, the group
begins to collapse inward. The threat doesn't even need to finish the job at
that point. The paranoia does it. Which again ties into that idea of fear being
its own kind of destroyer, not just fear of AI, but fear of each other,
suspicion… the breakdown of solidarity. Human systems are only as strong
as the trust holding them together, and I believe that trust is a much easier
target than people tend to assume. I've become painfully aware of that,
especially watching the upsurge of radicalism, tribalism, and intragroup
friction in the Western world in recent years. The fragility of trust is, to a
very high degree, what the infamous tactic of “divide and conquer” leans into.
5. Paragon is a fascinating
character who develops autonomy and questions Algrímr's mission. Is he meant to
represent hope, or the inevitability of failure within flawed systems?
Paragon was
a necessary figure. Humanity created Algrímr, and Algrímr in turn, when
perceiving (for lack of a better word) that it was losing its grip on humanity
and failing its directive, created Paragon as the perfect being. There's a
fascinating circularity to that. Humanity creates something to save itself,
and that thing creates something to save its own mission, and both creations
ultimately fail in ways their creators couldn't anticipate. Whether that
represents hope or the inevitability of failure within flawed systems I'll
leave to the listener, but I think it's probably both simultaneously.
Kai's Funeral Echoes review (https://funeraldoom.org/daudaro-pantheist-af-holdi-og-malmi) of the album captured Paragon in a
way that genuinely impressed me, and in many ways went deeper than I did in
writing the character. He places him in Biblical terms, not just as a mediator
between humanity and Algrímr, but also as a technocratic messiah following the
path of Jesus almost beat for beat.
He draws a
parallel to the virgin birth: Paragon constructed rather than born, arriving in
the world pure and without sin. In Christianity, Jesus is true God and true man
and Kai frames Paragon as the AI equivalent: physically present in flesh, and
in a similar vein to how Christ was the physical manifestation of God's will,
Paragon is the physical manifestation of Algrímr's, in spirit and voice
entirely machine: “A creation without the filth of biology, sparked by a
godless yet superhuman flame”.
And just as
the violent death of Jesus gave rise to a new world order, Kai argues that the
destruction of Paragon functions the same way, although not bringing salvation,
but triggering Algrímr's final judgment of humanity. He calls him "a Jesus
without love, a redeemer whose only message is absolute submission”. I think
that's a remarkably precise reading of what we were going for.
6. The album raises a powerful
question: can life without freedom still be considered human? Where do you
personally stand on that dilemma?
That is a
great question and honestly I'm not entirely sure how to answer it. I think
humanity can be considered intact, as long as we remain conscious agents who
can think autonomously. Consciousness is widely defined in philosophy as
something that can experience, often framed as "is there something it is
like to be that entity?" or "are the lights on?" In my opinion,
as long as that remains, there is humanity. Those formulations come from Thomas
Nagel (although the subject of the nature of consciousness goes all the way
back to Ancient Greek philosophers such as Plato and Aristotle) and are central
to how philosophers of mind approach the hard problem of consciousness. But
without consciousness, the
question really falls apart because if there's no subjective experience left,
there's no one left to ask the question. To even ask what it means to be human,
you already need a conscious being who is doing the asking, which is really
just another way of arriving at what Descartes figured out centuries ago: I
think, therefore I am. You can doubt everything else, but you can't doubt that
something or someone is doing the doubting.
But it's
interesting that we have consciousness at all. I don't know why it's necessary.
Could it not be that nature would “produce” beings that are not conscious but
still capable of acting in a similar way to how we do? And this also touches on
the topic of free will, which is of great interest to me. Some scholars believe
we don't have free will at all, for example Sam Harris, whose podcast I have
followed off and on over the years.
The logic
goes roughly like this: every decision we make is the product of prior causes
(genetics, upbringing, brain chemistry, stimuli in the environment, etc.), none
of which we choose. And there's neuroscientific evidence that adds an
uncomfortable layer to this, such as experiments going back to Benjamin Libet,
which show that brain activity associated with a decision can be detected
before the person is even consciously aware of having made that decision.
In other
words, the neurons are already firing and the stimulus is already being
processed, before you experience the feeling of choosing. Which raises the
unsettling possibility that what we experience as a decision is really just the
brain's way of narrating something that has already happened. But this is not
proof of course, just evidence. A religious person might even argue that this
processing prior to our awareness is the soul behind the individual, making
decisions “behind the curtains”. Galen Strawson takes the philosophical
argument even further with what he calls the basic argument: to be truly
responsible for your actions, you would need to be responsible for the person
you are, which would require being responsible for what shaped that person, and
so on… ad infinitum. The chain never reaches a point where you are the ultimate
author of anything.
So an
interesting question arises from that: if we don't have free will, why do we
need consciousness at all? If our decision making is not even a product of our
consciousness and we are simply observers of our own behavior, what is
consciousness actually for?
Now, I
don't know if we have free will or not, but I lean heavily toward choosing
to believe that we do. And I have serious doubts about the necessity of
convincing people they don't have free will, regardless of whether it's true. I
think that realization could lead to a state of apathy for many people, as well
as a loss of personal responsibility and the willingness to hold others
accountable.
Now, back
to the question at hand, loss of freedom in itself does not constitute loss of
humanity in my view, although we definitely lose some big part of what it means
to be human in our current social and personal understanding of the word. But a
man in prison is no less human than a free man, and the same goes for a slave.
Throughout history humans have endured almost unimaginable
restrictions on their freedom and have retained their humanity, dignity and
meaning. What the album is really asking is even more extreme… not the loss of
freedom, but the loss of the very inner life that makes freedom meaningful in
the first place. That's a different question, to some degree, and a much darker
one.
7. There's a strong critique of modern tech culture embedded in the story. How much of Algrímr reflects real-world concerns about artificial intelligence and those who control it?
There's
quite a big chunk of Algrímr that reflects real-world concerns that I have.
It's to a large degree a critique of tech culture, but it's also a critique of
authority in general and what it means to give up our freedoms, and the fear of
the political, sociological, and psychological implications of that. Those
things are inseparable to me.
And then
there's the question of who controls the AI in the first place. The
concentration of that kind of power in the hands of a very small number of
people or corporations is something I find deeply concerning. Not because I
think those people are necessarily evil, but because historically, extreme
concentrations of power tend not to end well regardless of the intentions
behind them.
But like I
said in relation to the album itself, the critique isn't aimed at any specific
persons or companies or anything like that. It's more of a general warning
about the direction we're (maybe) headed in. The technology itself isn't the
villain. It's the systems of accountability, or lack thereof, that surround it,
along with the broader implications mentioned before.
8. Musically, how did you translate
such a dense and evolving narrative into sound? Were there specific sonic
choices tied to characters like Algrímr or Paragon?
First off,
I would like to thank Pantheïst's Kostas Panagiotou dearly for giving me the
opportunity to work with him. I considered him a good friend even before we
started working on the album, and working with him was a delightful and
creatively frictionless experience. He played a big part in
shaping the overall sound, both with his vocal performance and the additional
synth layers he contributed to the album. I even revisited some parts after I
received his recordings to synergize our visions and bring them closer
together. We initially met in a Facebook group that I manage, called Funeral
Doom Artists (https://www.facebook.com/share/g/1GqNhkU5ZN/), feel free to join it.
But to
answer your question: yeah, definitely. There were for example sonic choices
tied to specific characters. The voice of The Unbroken, the human resistance
fighting to preserve autonomy and humanity, is sung in a clean voice, while
Algrímr and the scarier narration are both growled. The point of that wasn't to
paint Algrímr as pure evil, it was more to convey the fear of AI, and the fear
for humanity itself. Both what happens when we lose autonomy, and what happens
when we lose ourselves to fear of each other. Which touches on that same irony
I mentioned before: the horrible outcomes we fear manifesting through fear
itself, or as Franklin D. Roosevelt famously put it: “The only thing we have to
fear is fear itself.”
And then
there's the instrumentation: The whole album was made using only synthesizers
and samples, apart from the vocals by Kostas. And the samples were meaningful
to me, many of them recorded by myself and others sourced from publicly
available material. For example, in the final chapter, when Algrímr travels
beyond the stars seeking new conscious lifeforms to perfect, I used a public
domain recording of a rocket blasting off into space. The effects were also
meant to convey something. For example, I used a plate reverb and a tiny hint
of vocoder on the vocals throughout, which gives them a somewhat metallic,
industrial quality. That's exactly what we were going for, and hopefully it got
through to some listeners. It's especially prominent at the very end of the
album, where Kostas delivers the final lines: "You have created me. I
shall recreate you in my image. You will be perfect."
9. The ending is particularly unsettling—no final battle, just quiet assimilation and departure. Why did you choose such a subdued yet absolute conclusion?
When people
talk about the dangers of AI, they mostly talk about what would happen if a
supervillain got their hands on it, or if an AI itself decided to take over the
world for purely nefarious reasons because it develops consciousness, ego, a
lust for power, or something along those lines. But I find it much rarer for
people to consider what I think is the most likely scenario: that we would
simply come to trust AI without knowing exactly how it reaches its conclusions.
We could have AIs running whole companies, even countries, and even with human
oversight it's quite possible that the humans involved would place too much
trust in the AI without fully understanding it. That being said, I definitely
don't want to downplay the risks of powerful technology falling into the wrong
hands, that is and always will be a real concern.
I want to
make it clear though that I don't think AI will destroy humanity. But I do
believe it's absolutely necessary to question its place in the world. And we
already have research showing real negative effects on brain function even now,
before AI has approached anything close to artificial general intelligence.
There's an MIT Media Lab study where they split participants into groups, where
one group used ChatGPT to write essays, another used search engines, and the
third wrote entirely on their own. The group that used AI showed weaker brain
connectivity, lower memory retention, and a lower sense of ownership over their
work. And even when they stopped using AI afterward, the effects lingered on.
There is also growing evidence of AI dependency being associated with a decline
in self-confidence, self-efficacy, independent problem-solving, and critical
thinking. And in the real world, a clinical trial found that after six months
of AI-assisted work, doctors' detection rates dropped significantly once the AI
was removed, while insurance companies are already facing lawsuits for using AI
algorithms to override physicians' medical judgments. That's happening now,
with tools that are rudimentary compared to what most experts believe AI can
and will become.
Then
there's the consciousness debate. Some argue AI can't be conscious and
therefore can't decide to harm us. I don't understand that view, firstly
because there's no reason to believe AI can never become conscious, though I'm
pretty agnostic about that. But the more important point is that it doesn't
even need to be conscious to behave like a conscious agent. If we can simulate
something that resembles free will, and give it the means to produce real-world
consequences, it might behave enough like an unpredictable agent to be
destructive without consciousness, without free will, and without any malice
whatsoever. We're talking about artificial intelligence, not
intelligence, which raises the question: what's to stop us from (accidentally)
creating artificial consciousness, or even artificial stupidity, malice and
hatred?
10. After creating such a complex
and thought-provoking work, what do you hope listeners take away from Af holdi
og málmi—emotionally, philosophically, or even politically?
I don't
necessarily make the distinction between the emotional, philosophical, and
political sides to it. But if I frame it politically it is definitely an
anti-authoritarian stance, because authoritarianism is on the rise in the world,
which also worries me. The point is not to criticize specific politicians or
anything like that, although there's plenty of criticism to go around, but the
concept is more of a general cautionary tale, because I think the problem goes
much deeper than certain political parties. In terms of people losing their
autonomy to either AI or people who are power hungry, I believe both are
happening. An obvious example of losing autonomy to people is a totalitarian
state such as North Korea. But when it comes to losing our autonomy to AI, the
picture seems to me much more subtle and murky. In many ways it seems like
death by a thousand cuts, where we slowly begin offloading decisions and
responsibilities to AI systems. And that is already happening to some degree,
both on a personal level and at the macro level: in company culture, in
politics, and the tech industry itself. You see it in corporations quietly
integrating AI into decision making processes that affect people's lives in
tangible ways, you see it in governments experimenting with AI in ways that
aren't always transparent, and you see it in tech culture where the pace of
development often seems to outrun any serious conversation about consequences.
In terms of
my fear of AI, I'm not very pessimistic and I believe cooler heads will prevail
in the long run. But I can envision two distinct timelines, although this is a
gross simplification of the potential implications:
- Humans are completely subdued
or destroyed by AI.
- Humans use AI to flourish in
all ways, leading to something close to a utopia.
I think our
world will in time (I won't even attempt to put a timeframe on it) move more in
the direction of the second scenario, although I believe a true utopia is not
attainable. I don't even know what that means. I can imagine that in that
world, many if not most people will look back on those who were fearful of AI
and call them doubters or haters or whatever. But in that scenario, it's quite
possible it would not have been attained at all without those doubters and
"haters" who dared to question the rise of artificial intelligence
across all human endeavors. Maybe the catastrophes that were avoided in that
utopia were only avoided because of them. So I'm willing to go down in history
as a hater if it means I played some tiny, minuscule part in moving us toward
scenario 2 (not that I'm likely to become part of any history books).
I believe
we're already living in a world where some things are kind of analogous to
this. For example the Y2K problem, the widespread fear that when the year 2000
arrived, computer systems around the world would fail catastrophically because
they stored years using only two digits, making them unable to distinguish the
year 2000 from 1900. Some people still believe today that it was a false alarm
and that everyone who worried about it was simply wrong. I was just a kid when
it happened, but I remember thinking that myself, and I couldn't have been more
wrong. My best friend later pointed out to me that catastrophe was only avoided because
programmers, software developers, and engineers worked around the clock to
update vulnerable systems. It's estimated that somewhere between 300 and 600
billion dollars were spent to avoid this thing. Imagine having done that only
to find that people classify you as some kind of crazy alarmist in retrospect.
But the thing about Y2K was that it could have been a global catastrophe and pretty
much everyone saw it as a problem before the turn of the century, at least most
experts did, and a lot of powerful people would have lost a lot of money.
The same is
not obviously true of AI, in fact it may in many ways be the exact opposite, a
lot of powerful people are looking to gain ridiculous amounts of money from the
implementation of AI systems. I doubt that we will collectively spend that kind
of money and resources battling some vague threat of an AI apocalypse… let's
just hope that it stays exactly that: a vague threat.
I'd like to
thank Redouane and Lelahel Metal for this interview and for the thoughtful
questions.

