
What if robots wrote love poems?

First post in which the author expounds his motives for launching this blog and takes you – dear reader – on a wild ride through the philosophy of robots, only to wind up at a topic as mundane as copyright.[1]

*Untitled*, by Julian Hauser

Hello world! I’ve wanted to start a blog on philosophy, technology, and culture for a long time. The fact that you are reading these lines means that – after carrying a nascent idea around in my head for many months – I have finally taken the plunge. My initial research for this blog only led to more and more confusion, which made me hesitant to publish anything for everyone to see. As you will probably notice, I still don’t understand the topics I want to discuss. But I have realised that this doesn’t matter – I’ll learn more if I allow myself to make mistakes. So voilà, I present: What if robots wrote love poems!

For those who want to know a bit more about who I am and why I’ve started this blog – and, say, not one about my love for cheese – I refer you to the about section.

In this post I want to get straight to the point and muse a little on this blog’s titular what if? question. I hope to show how seemingly simple or even silly questions give rise to a stream – make that a tsunami! – of further questions, and how, after scratching just a little at the surface, we see philosophy lurking everywhere. I don’t intend to delve into the substance of any of the topics I touch upon. I simply want to illustrate some of the topics I find fascinating, and how I think philosophy is important in talking about them.

So, what if… you got a love poem from a robot?

I picked this blog’s name for a very simple reason: if robots did write love poems, we would have to start thinking long and hard about some very fundamental questions concerning a wide range of topics. And the outcome of all this long and hard brooding would have repercussions not only for our robotic brethren but also for us flesh-and-blood humans. And because such contemplation takes time, it’s probably best to start with it now and not when the first robot hands you – robotic hands shaking – an envelope with a heart carefully drawn onto it.

That would be quite something, wouldn’t it?! How would you feel? Surprised, happy, shocked, scared, weirded out, valued? And how would you act? Perfunctorily, as with a machine, or with great care, as any non-asshole would with another person? With these questions we arrive at the border of philosophy proper.[2] One way of crossing into it is to turn the woulds in these questions into shoulds and change from a descriptive account to one that makes a moral argument.

Well then, how should we feel and act? One reply to this question would consist of the very philosophical manoeuvre of evading it. Yeah, I know, that’s what sneaky politicians do (very often successfully). However, I am going to argue here that dodging the question is not always a deplorable thing to do; some questions really are badly put. Questions always shape the set of possible answers,[3] and sometimes a case can be made that a question contains implicit assumptions that unduly narrow that set.

Can robots even write love poems?

This blog’s question assumes something that many – most? – people would deny, namely that robots are capable of writing love poetry. So even if there were indeed a robot handing you an envelope, and even if that robot asserted – if prompted – that it (or he, or she, or something else entirely; another question is waiting here) wrote a love poem, some people would dispute that you had received a love poem at all. Why is that? I think there are at least two grounds on which such a claim can be made: robots can’t be creative, or robots can’t love. These two challenges correspond to the two words in love poem: if robots can’t be creative, they can’t write poems; and if they can’t love, they can’t write love poems.

Love and creativity are some of the most basic human traits (or one could say – some of the most human traits). If we wonder whether machines can love or be creative, we are never far from pondering whether they can love or be creative like us. And thus very quickly human nature finds itself at the centre of our inquiry.

What’s more, it might be the case that machines aren’t creative or loving like we are, but also not outright uncreative and unable to love. This would suggest that their creativity and love are different from ours. Love and creativity would thus cease to be exclusively human concepts[4] – an idea that I personally find extremely fascinating. And, if we allow ourselves to think outside the – often very anthropocentric – box, I believe this outcome to be very plausible. That many of our most basic concepts are informed only by our human condition is understandable – but not always justifiable.

However, before forever losing ourselves in these ever-expanding circles of questions, let’s simply assume that robots can write love poems and revisit our initial question. So, back to square one: you have just received a love poem from a robot.

Are robots people?

We have seen that looking at this situation from the point of view of moral philosophy requires us to think about how we should react. Should you try to be gentle when breaking it to the robot that its amorous feelings are not mutual? Generally, we think of machines – such as cars – as objects that we treat well because they are mainly useful to us when not kaputt, or because some other (human) person has an interest in them. Unlike a human being, a car does not have any value by itself (called intrinsic value). A car is only valuable because it is an important means for some person to achieve their goals (this is called instrumental or extrinsic value).

I am not interested in answers such as “I should treat the robot well because otherwise it will hit me”, as that would simply mean that you value yourself. I want to know whether there can be an answer such as “I should treat the robot well because otherwise it will be upset and sad, and that’s a bad thing”. As you can guess, this question is very closely related to whether a robot can feel pain and whether it has interiority or consciousness, and, by extension, whether it is a person. Enough topics for about a thousand books (to understate it greatly).

And if that isn’t enough for the next few blog posts, here are some more questions on the topic. If we believe that relationships between humans are valuable, is this also the case for relationships between humans and robots? If so, when does a relationship become valuable? A relationship with a simple computer clearly isn’t valuable, but with a sophisticated one – as in the movie Her – the case isn’t so clear-cut. And to crank the weirdness level up a notch: what about relationships between machines? And just because it’s possible: should we fight for the right to marriage between humans and machines – and between machine and machine?[5] There are definitely some interesting and controversial issues that we will see ourselves confronted with in the (not so near) future.

What’s next

While it is enjoyable (at least to me) to let my imagination run wild, and while I do think that these thoughts are not simply silly banter, I admit that their relevance to today’s world may appear a bit limited. I want to finish this post by mentioning a (comparatively) down-to-earth topic that will probably keep us occupied for the next few posts. What if our robot didn’t actually want to give the love poem to you – sorry about that! – but wanted to publish it and make buckets of cash?[6]

Clearly the robot would have to study copyright law. I have worked on copyright for some time, especially analysing the ethics behind it (see, for example, my Master’s thesis). Given that this topic currently fascinates many, I have decided that it will form the basis of the next few posts: Should authors be able to control their creations? Should people be free to use the cultural artefacts of humanity? We’ll try to find some answers to these questions that are not simply a continuation of the often hysterical public debate.

If your head is spinning from too many questions and too few answers – that’s a feeling I share. I hope future blog posts – and conversations! – will help both you and me to discover issues that are both captivating and important.


Footnotes


  1. A friend who was kind enough to read this post before it appeared here gently conveyed to me that this type of beginning might make me appear to be a bit of a nut job. I know that one should never explain jokes, but in this case it might be necessary: the style of this opening line is borrowed from old German stories (such as this one), which used to introduce chapters in this way. I’m not crazy, really.
  2. Philosophy proper – don’t press me to explain what that is; it doesn’t actually exist.
  3. This is simply how questions work. If questions didn’t restrict the scope of possible answers, then any arbitrary answer would be admissible – and then questions wouldn’t deserve to be called questions anymore.
  4. This statement obviously disregards the possibility of animals being creative. That is not my intention; I focus on machines because people see animals as alive and thus take less issue with ascribing to them qualities similar to those of human beings.
  5. A related topic is machine-mediated human relationships. How are human relationships affected by the fact that they are mediated by the internet, Facebook, or other technologies? Is it a good idea that the rules governing this mediation are set not by the needs of humans, but by the need of a company to turn a profit?
  6. If you weren’t the upstanding citizen that you are, you might be tempted to steal the robot’s love poem and publish it yourself. In that case we wouldn’t be dealing with copyright, but with a violation of privacy. This is definitely also a topic I want to cover in the future.