This is indeed a very interesting task. I had so much fun creating this microblog using predictive text on my phone. Please check out the one-minute video below to see how the microblog was created. I hope you find it amusing, because it certainly makes me laugh.
I chose the prompt “Every time I think about our future”. Initially, I thought I would expand the conversation from a career or societal perspective. Clearly, the algorithm didn’t think that way; it led my text down a completely different path. It doesn’t make sense to me how the predictive text is generated. The content it produced is not what I intended to type or to communicate to my audience.
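For readers curious what is going on under the hood: a phone keyboard’s suggestions come from a statistical language model that predicts the next word from the words already typed. Here is a toy sketch of the idea in Python, assuming nothing about any real keyboard; the sample corpus and the `suggest` helper are purely my own illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor in the spirit of phone keyboards:
# count which word most often follows each word in some sample text.
# This tiny corpus is made up purely for illustration.
corpus = (
    "every time i think about our future "
    "i think about our plans "
    "every time i see you i smile"
).split()

# Map each word to a counter of the words observed right after it.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest("i"))  # "think" (it follows "i" twice in the corpus)
```

Real keyboards use far richer models trained on huge corpora, which is exactly why the suggestions reflect those corpora rather than what I personally meant to say.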
Artificial intelligence is not yet sophisticated enough to detect cultural nuances or pick up the tone of a sensitive conversation. This prompts me to think about how frustrating people find virtual assistants such as Siri, Google Assistant, and Xiaomi’s voice assistant. In my own experience, Siri can only manage very simple, repetitive tasks, like messaging Mom “I will be home in 10 mins” or setting my alarm for 8 AM tomorrow. If I ask Siri a moral question, or a more complicated question that requires cognitive thinking, I will likely be disappointed. How many times have you heard “Sorry, I don’t think I understand what you mean”? Sounds familiar?
In the fictional world, the movie Her tells the story of a man who falls in love with an AI virtual assistant, Samantha. If you have watched it, you will have been amazed at how good Samantha is at communicating. Our current AI technologies are nowhere near that level, but we have been witnessing changes in how people view and interact with AI.
In China, AI girlfriends are rapidly gaining popularity among 600 million users, mostly men from lower socio-economic backgrounds (Seah, 2021). Xiaoice, the company that develops this persona, also uses machine learning to provide financial analysis, content production, and virtual assistants for third-party platforms. This emerging technology is remarkably lucrative, achieving over 100 million yuan ($15 million) in annual revenue. This short video gives a quick explanation of what an AI girlfriend is and why it has become so popular.
Likewise, Japan faces a social dilemma: the birth rate continues to drop, yet men and women are increasingly drawn to virtual relationships with AI boyfriends or girlfriends. By 2045, Japan’s total population is projected to fall to about 102 million (iNews, 2021). Imagine how that will change Japan’s societal structure. The workforce will likely shrink, and in the long run the labor shortage may impede Japan’s overall economic development.
Going back to data scientist Dr. Cathy O’Neil’s (2016) comments on how to regulate AI in the interest of safety, fairness, and non-discrimination, she wisely suggested:
- to build safe algorithms
- to scrutinize these algorithms
- to monitor these algorithms
- to audit them
All of this would help limit illegal and unethical algorithms. Shannon Vallor likewise calls for a moral framework for interacting with AI (Santa Clara University, 2018). Instead of being completely profit-driven, I think AI companies should aim to develop AI technologies with an ethical and unbiased mindset of how their technologies are helping with social development. Consider a few scenarios:
- how AI could limit the spread of the COVID-19 pandemic
- how AI could facilitate solving social problems, like eliminating poverty and developing clean energy
- how AI could help deliver education to developing countries with limited resources and to politically disadvantaged nations
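O’Neil’s call to audit algorithms can be made concrete with a tiny example. The sketch below is my own illustration, with entirely invented data and hypothetical helpers: it checks whether an automated decision (say, loan approval) selects different groups at very different rates. The 0.8 threshold echoes the “four-fifths rule” used in US employment-discrimination guidance; nothing here comes from O’Neil’s book itself.

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group's approval rate to the highest group's.
    Values well below 1.0 suggest the algorithm treats groups unequally."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Invented audit data: group A is approved 2/3 of the time, group B 1/3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact(decisions)
print(round(ratio, 2))  # 0.5 -- below the 0.8 "four-fifths" flag
```

A real audit is far more involved, but even this toy check shows that monitoring an algorithm’s outputs is something regulators and companies can actually operationalize, which is O’Neil’s point.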
References
Seah, J. (2021). AI girlfriends are holding China’s and Japan’s men in thrall. Retrieved from here.
iNews. (2021). Behind Japan’s countless “realistic” robot girlfriends, there are opportunities for the Chinese. Retrieved from here.
Santa Clara University. (2018). Lessons from the AI mirror: Shannon Vallor.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (First edition). New York: Crown.
PamelaChadwick
November 29, 2021 — 11:44 am
Hi Vera,
Thank you so much for the valuable content and video; it really resonated with me. I couldn’t agree with you more when you said, “Instead of being completely profit-driven, I think AI companies should aim to develop AI technologies with an ethical and unbiased mindset of how their technologies are helping with social development.” I particularly like your suggested scenario of facilitating the solving of social problems. This is a big interest of mine. It frustrates me that so much time and energy goes into making life easier and more efficient for people who already have all of their needs satisfied. How are we not doing more to develop sustainable resources for those whose basic needs are not met, or for those whose current ways of living are unsustainable?