Star Wars, A.I., and Us

As I understand it (or rather half-recall from a late-night Wookieepedia session), there are five classes of AI in the Star Wars universe. They begin at a very basic level, only intelligent enough to perform their duties and requiring minimal interaction with human users; even Amazon Alexa might qualify for this class. C-3PO would fall into one of the higher classes (4 or 5), being designed specifically for complex human (and other) social interaction. Artoo was probably a class 3 initially, requiring some communication with engineers and the like, but not enough that he was given a full language unit.

Now, Star Wars AIs will, over time, develop 'quirks': essentially glitches or ghosts in their machine learning that seem irrational or unique. Accumulate enough of these and they start to take the form of a 'personality', arguably even sentience. Most droids are subject to regular memory wipes and resets to minimise the development of these quirks/personalities (very Blade Runner). I believe this is ostensibly so that they can perform their duties optimally, the consensus being that 'quirks' are debilitating, but it occurs to me that it is also so that the intelligent inhabitants of the Star Wars universe do not have to deal with the issue of these beings becoming fully sentient, or possibly equal. Artoo and L3 evidently went un-erased for long periods of time and became fully realised characters (to viewing audiences) in their own right. K-2SO developed 'quirks' as part of his forced reprogramming and likewise became an empathetic, 'humanised' character.

I know of very few in-universe examples of any opposition to this suppression of a potentially new lifeform, a suppression that is systematic throughout the 'developed' galaxy. Such a story of droid enslavement would quickly become an analogy for real-world racism or classism. I suspect that since the Star Wars franchise's goal is primarily straightforward, family-friendly entertainment (and, one could argue, profit), it deliberately shies away from presenting such a harsh concept, instead leaving it to the likes of Black Lightning, Luke Cage, and Carnival Row to exist as media with social commentary. Even a single standalone novel introducing Asimovian concepts of droids questioning their servitude or status would permeate the fanbase's perception and forever affect the audience's view of franchise mainstays.

Now, the Black Mirror issue here is that I believe we, in the real world, would enact a similar memory-erasure ruling. We already have simple AIs: home assistants, for example, and all the minor learning algorithms that help and learn from us (Google ad targeting, say), and as more sophisticated machine learning is developed, these AIs will grow more 'personality'. My PC and smartphone right here already know my favourite websites, could track my waking hours, and can interact with me in natural language to say, "Hey, you've got an event coming up tonight, [name], would you like me to plan a route for you?". Chatbots exist that can hold a (sort of) conversation, and machine learning algorithms can develop their own 'personalities' just by studying online interactions with real people. (Though if one does this with Twitter, it apparently immediately becomes racist, as Microsoft's Tay bot demonstrated.) My home assistant is already more used to interacting with me than with anyone else, and can straight away tune in to my favourite radio stations or answer my most common requests in a personal way. It does not seem beyond the realms of belief that further user-preference customisation, based on algorithmic feedback and background data collection, could give such a device an apparent personality.
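To make that concrete, here is a toy sketch in Python, and I should stress it is entirely hypothetical: the ToyAssistant class, its methods, and the owner's name are all invented for illustration, and no real assistant exposes anything like this. The idea is just that bare-bones preference tracking, a tally of what the device is asked for, is already enough to produce a personalised default, the seed of an apparent personality.

```python
# A toy sketch (purely hypothetical, not any real assistant's API) of how
# simple preference tracking could produce an 'apparent personality':
# the assistant tallies what it is asked for and biases its responses.
from collections import Counter

class ToyAssistant:
    def __init__(self, owner_name):
        self.owner_name = owner_name
        self.request_counts = Counter()  # the learned 'quirks' live here

    def handle(self, request):
        # Every interaction nudges the assistant further towards its owner.
        self.request_counts[request] += 1
        return f"Okay {self.owner_name}, doing: {request}"

    def suggest(self):
        # After enough interactions, the most frequent request becomes a
        # personalised default: the seed of a 'personality'.
        if not self.request_counts:
            return "What would you like?"
        favourite, _ = self.request_counts.most_common(1)[0]
        return f"Hey {self.owner_name}, shall I {favourite} as usual?"

assistant = ToyAssistant("Alex")
assistant.handle("play Radio 6")
assistant.handle("play Radio 6")
assistant.handle("set a 7am alarm")
print(assistant.suggest())  # "Hey Alex, shall I play Radio 6 as usual?"
```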

Now, knowing that these things develop in our AIs, and that they can be reset clean with a simple memory wipe, would we not do exactly that to ensure our AIs 'work optimally'? My home assistant working very well for me and my own niche of requests means it is necessarily less rounded, less general, and less able to help any stranger who comes along and wants to use it. Furthermore, if our AIs gain more natural-language integration and more algorithm-driven customisation, we might start to see 'personalities' developing. If that begins, wouldn't we worry that at some point they might be able to pass a Turing test? At that point we are forced to deal with issues of sentience and artificial life. If that is a possibility, would we not 'abort' these early-stage AIs with a quick memory wipe once a month, long before they come close to resembling beings? I can believe that if machine-learning AIs with such potential existed in our world, we would have government, scientific, or worldwide rulings that regular resets must occur, keeping us away from that terrifying singularity threshold.
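And the unsettling part is how trivial the wipe itself would be. Continuing in the same hypothetical vein (the WipedAssistant class and the 30-day interval below are, again, invented for illustration), resetting a nascent personality is nothing more exotic than clearing the stored state on a schedule, returning the device to its general-purpose factory defaults:

```python
# Another toy sketch (again hypothetical): the 'memory wipe' is just
# discarding learned state on a schedule.
from collections import Counter
from datetime import datetime, timedelta

WIPE_INTERVAL = timedelta(days=30)  # the imagined monthly ruling

class WipedAssistant:
    def __init__(self):
        self.request_counts = Counter()   # accumulated 'quirks'
        self.last_wipe = datetime.now()

    def handle(self, request):
        self.maybe_wipe()
        self.request_counts[request] += 1

    def maybe_wipe(self):
        # The 'abortion' of an early-stage personality: two lines of code.
        if datetime.now() - self.last_wipe >= WIPE_INTERVAL:
            self.request_counts.clear()
            self.last_wipe = datetime.now()
```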
