Beware! It Is Self-Aware!

The King’s Wise Men is the simplest induction puzzle there is: The King called the three wisest men in the country to his court to decide who would become his new adviser. He placed a hat on each of their heads, such that each wise man could see all of the other hats, but none of them could see their own. Each hat was either white or blue. The king informed the wise men that at least one of them was wearing a blue hat; thus there could be one, two, or three blue hats, but not zero. The king also promised that the contest would be fair to every one of the three men. The wise men were not allowed to communicate with one another. The first of the three men to stand up and correctly announce the color of the hat on his head would become the King’s new adviser. After a while, one man stood up and announced the answer. The answer was correct: His hat was blue … and so were the other two.
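For readers who like to see the logic spelled out, the puzzle can be brute-forced in a few lines. This is purely my own illustrative sketch (the function names and structure are mine, not from any published solution): enumerate every hat assignment consistent with the king’s announcement, and after each round of silence discard the worlds in which somebody could already have answered.

```python
from itertools import product

def solve(actual):
    """Epistemic brute force for the King's Wise Men puzzle.
    actual: the real hat assignment, e.g. ('B', 'B', 'B')."""
    # Worlds consistent with the king's announcement: at least one blue hat.
    worlds = [w for w in product('BW', repeat=3) if 'B' in w]

    def deducible(w, i, candidates):
        # In world w, can man i pin down his own hat color, given that
        # only `candidates` remain possible as common knowledge?
        consistent = {v[i] for v in candidates
                      if all(v[j] == w[j] for j in range(3) if j != i)}
        return len(consistent) == 1

    for round_no in range(1, 4):
        for i in range(3):
            if deducible(actual, i, worlds):
                color = next(v[i] for v in worlds
                             if all(v[j] == actual[j] for j in range(3) if j != i))
                return round_no, i, color
        # Silence is informative: drop worlds where someone would have spoken.
        worlds = [w for w in worlds
                  if not any(deducible(w, i, worlds) for i in range(3))]
    return None
```

With all three hats blue (the only assignment that is fair to every man, hence the king’s promise), nobody can answer at first; `solve(('B', 'B', 'B'))` succeeds only on the third pass, while `solve(('B', 'W', 'W'))` lets the lone blue-hatted man answer immediately.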

A variant of this logical puzzle, simplified and adapted to robotics, was used by Rensselaer Polytechnic Institute professor Selmer Bringsjord in an AI experiment.

The three cute and polite NAO robots (not even the latest model, at that) were made aware of a “dumbing pill”. What’s that? It’s just a button on top of their heads. Tap the button and the robot will be silenced — go “dumb.” The tester taps the head of each robot but silences only two out of the three. One is still able to speak. None of them knows whether or not it was silenced. The robots were then asked which one “received the dumbing pill.”

All three of the robots try to answer “I don’t know.” The one that wasn’t silenced stands up and speaks: “I don’t know.” It takes the robot only a second to realize that since it can speak, it was the one that didn’t receive the dumbing pill. Entirely on its own, the robot arrives at the logical conclusion and, without any pre-programmed instructions, politely corrects itself. The robot, scientists say, BECOMES SELF-AWARE.
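The inference itself is tiny. Here is a toy mock-up of it (entirely my own; the actual experiment ran on Bringsjord’s formal-logic prover, not anything this crude):

```python
def answer(can_speak, heard_own_voice):
    """One robot's reasoning step in the 'dumbing pill' test (a toy model)."""
    if not can_speak:
        return None                    # a silenced robot produces no sound
    if not heard_own_voice:
        return "I don't know"          # prior knowledge alone is not enough
    # New evidence: the robot heard itself speak, so it cannot be silenced.
    return "Sorry, I know now! I was not given the dumbing pill."

def run_experiment(silenced=(0, 1)):
    """All three robots try to answer; only the unsilenced one is heard."""
    transcript = []
    for robot in range(3):
        first = answer(robot not in silenced, heard_own_voice=False)
        if first is not None:          # this robot just heard its own voice
            transcript.append((robot, first))
            transcript.append((robot, answer(True, heard_own_voice=True)))
    return transcript
```

`run_experiment()` yields robot 2 first saying “I don’t know” and then politely correcting itself, mirroring the NAO demo.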

Indeed, nothing here except the knowledge about the dumbing pill was pre-programmed, thus the experiment was “clean” — no trick or prank. The NAO robot was able to understand the question, recognize its own voice and come to the conclusion that it was logically impossible for it to have received the dumbing pill. (I catch myself trying to say he instead of it when addressing genderless robots.)

However simplistic and crude, this AI test shows that artificial consciousness in its basic forms already exists, and gives us a glimpse into the behavior of a truly sentient robot of the future. Also worthy of note is the fact that NAOs aren’t even the most sophisticated robots around, particularly not the ones that took part in the experiment.

Thus beware and be warned: the robots are really coming. Although Dr. Bringsjord claims that the human mind will never (like never-ever) be surpassed by that of a machine. Do I wish to live long enough to see him proven wrong?


Analyse This


Two researchers from Rutgers, Ahmed Elgammal and Babak Saleh, created an algorithm capable of classifying works of art and determining which paintings are the most innovative and unique. They built a database of 81,449 digital images of paintings created by 1,119 artists from the 15th century to the present day. Combined with the Wikiart collection, which has some 62,000 images, it gave their program plenty of diverse data to work with.

The accuracy with which the algorithm identifies paintings by artist reaches 63%; by genre, 60%; and by style, 45%. Not terribly earth-shattering results, but the algorithm is still in development.

Elgammal and Saleh use a visual classification system that breaks down objects and types of scenes into categories called “classemes.” These groups can be simple (like basic shapes and colors), more complicated (like spotting a barn or the Empire State Building), or complex (a dead body, a figure running away). The algorithm can break a painting down into as many as 2,559 classemes.
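To make the idea concrete: a painting in this scheme boils down to a long binary vector recording which classemes fire and which don’t. The sketch below is purely illustrative (the classeme names and the Jaccard similarity are my stand-ins; the real system uses thousands of learned classifiers and its own distance measures):

```python
# A toy subset of classeme categories (the real system has ~2,559).
CLASSEMES = ["red", "circle", "barn", "skyscraper",
             "running_figure", "dead_body"]

def to_vector(detected):
    """Binary classeme vector for one painting, given the detected set."""
    return [1 if c in detected else 0 for c in CLASSEMES]

def similarity(a, b):
    """Jaccard overlap between two paintings' classeme sets (a stand-in
    for whatever distance the Rutgers classifier actually uses)."""
    return len(a & b) / len(a | b)
```

So `to_vector({'red', 'barn'})` gives `[1, 0, 1, 0, 0, 0]`, and two paintings sharing half their detected classemes score 0.5.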

As the algorithm goes through paintings, it also forms connections (like a web or network) between paintings based on their chronological age and what classemes they include.

This is how the algorithm draws conclusions about creativity and innovation. When a painting component shows up for the first time or in a novel way, it indicates originality. As MIT Technology Review points out, this approach uses network theory in a similar way to tracking epidemics, finding the source of traffic, or tracing popular people in social networks.
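A crude way to see how “showing up first” becomes a creativity score: walk the corpus in chronological order and credit each painting for the classemes no earlier work contains. This is a drastic simplification of the network analysis (the real algorithm propagates influence through the whole graph), offered only to fix the intuition:

```python
def originality_scores(paintings):
    """paintings: iterable of (title, year, classeme_set), in any order.
    Returns title -> count of classemes that painting introduced first."""
    seen = set()
    scores = {}
    for title, year, classemes in sorted(paintings, key=lambda p: p[1]):
        scores[title] = len(classemes - seen)   # never-before-seen classemes
        seen |= classemes
    return scores
```

In a toy corpus, the earliest work gets full credit for its classemes, and a later work that merely recombines what came before scores zero.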

“In most cases the results of the algorithm are pieces of art that art historians indeed highlight as innovative and influential,” Elgammal and Saleh told Tech Review.

While at it, the algorithm made a discovery, however modest.


Norman Rockwell Shuffleton’s Barbershop (1950)

The algorithm found that the painting by the Frenchman Frédéric Bazille, L’Atelier de la rue de la Condamine (1870), at the beginning of this post, and the painting by the American artist Norman Rockwell, Shuffleton’s Barbershop (1950), are quite similar. Art history has no mention of this fact.

The algorithm determined that the objects in yellow circles are very similar, while the areas outlined in red have similar composition and structural elements. The researchers are convinced that, without any exaggeration, it can be concluded that their computer algorithm has made a discovery, albeit a small one.

In their exemplary modesty, Elgammal and Saleh do not believe that such algorithms will replace art historians any time soon, but there is no doubt that as computer programs learn to “understand” paintings better and better, the accuracy of identification will significantly improve, making for many interesting discoveries.

Those who have met the article with particular enthusiasm voice a hope that algorithms will learn not only to “understand” art, but eventually to create masterpieces entirely on their own.

In all honesty, if I don’t live long enough to see it happen, I won’t regret it all that much.

Google “dreams,” created entirely by artificial neural networks, weird and mesmerizing, impress me, yes, but not in the way paintings of old (and not so old) masters do.

Oh, For Heaven’s Sake!


A robot with intelligence equal to or beyond that of humans is called “strong AI” or “strong AGI,” where AGI stands for Artificial General Intelligence. When, one wonders, will the technology to create strong AI become an everyday occurrence? Some experts are betting it will happen in the next two decades.

Great! No?

In the past year, people-in-the-know — Elon Musk and Stephen Hawking among others — have warned about an imminent threat to humanity — the rise of super-intelligent robots. (More on the subject: AI Or Die — Summoning The Demon and Number 1 Risk For This Century.)

What would the makeup of their super-intelligent high-tech “brains” be? Would super-intelligent robots have any MORALS whatsoever? What moral compass will guide them when their superior intelligence, having digested the entire uploaded Wikipedia, starts developing ideas of its own?

What could be better than religion, right? No?

Shouldn’t any superintelligence created by humans have a notion of God?

Preaching God to automatons, no matter how autonomous, sounds kind of wacky, no?

Actually, not so much, if one doesn’t overlook statistics of U.S. demographics in relation to Christianity. About 75 percent of adult Americans identify themselves as Christians, and 92 percent of our highest politicians in U.S. Congress belong to one or another denomination of Christian faith.

An associate pastor at Providence Presbyterian Church in Florida, Reverend Dr Christopher Benek, believes religions may help AI live alongside mankind. He is convinced that AIs won’t be worse than us, nor will they intentionally mistreat people.

‘I don’t see Christ’s redemption limited to human beings. It’s redemption to all of creation, even AI. If AI is autonomous, then we should encourage it to participate in Christ’s redemptive purposes in the world.’ (From a recent Gizmodo interview with Reverend Benek by Zoltan Istvan, author of The Transhumanist Wager.)

The question of “soul” is quintessential in the coming transhumanist age of machine intelligence. Does AI have a soul? Can it be saved?

Marvin Minsky, a pioneer in the field of artificial intelligence and an MIT professor, doesn’t see why not.

‘What humans have is a more complex and larger brain than any other animal — maybe a whale’s brain is physically large, but it’s not structurally more complex than ours,‘ he told the Jerusalem Post.

‘If you left a computer by itself, or a community of them together, they would try to figure out where they came from and what they are.’

Even Pope Francis recently sounded off on the possibility of aliens being converted when he affirmed that the Holy Spirit blows where it will.

Dominican monk robot?

Pope Francis said he would welcome Martians to receive baptism. Would the Catholic Church be as welcoming to a fanciful pile of electronically wired inorganic hardware created by human hands?

Also, God knows what surprises alien chemistry holds… No wonder some scientists are seriously suggesting that the alien life earthlings meet one day might well be smartly put together alien machines.

Come to think of it, the earthlings that aliens meet might be human-made super-advanced AGIs. What if these machines greet extraterrestrials with a smile and a warm “Do you accept Jesus as your personal savior?” “Allahu akbar!” “Namaste!” or whatnot?

Once you start thinking like that, it opens up even more questions: How would AI fit into the religious tensions already present around the world? Who is to say a machine with human intelligence wouldn’t choose to become a fundamentalist Muslim, or a Jehovah’s Witness, or a born-again Christian who prefers to speak in tongues instead of a form of communication we understand? If it decides to literally follow any of the sacred religious texts verbatim, as some humans attempt to do, then it could add to the already existing religious tensions in the world. (Zoltan Istvan’s article in Gizmodo, When Superintelligent AI Arrives…)

Interesting article, actually, with a great number of  amusing comments too…

‘Who is to say that one day AIs might not even lead humans to new levels of holiness?’  Indeed. That is, if humans would reach ANY level of holiness by the time THEY arrive.


AI or Die — Summoning The Demon


Speaking at an event in London, Professor Stephen Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.” It is not the first time the famous physicist has warned humanity of an uncertain future as technology learns to think for itself and adapt to its environment, bringing about our demise.


Image courtesy of Ryan Etter.


Earlier in the year Hawking said that success in creating AI ‘would be the biggest event in human history, [but] unfortunately, it might also be the last.’ 

He argues that developments in digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race which ‘pale against what the coming decades will bring.’

But Professor Hawking noted that other potential benefits of this technology could also be significant, with the potential to eradicate war, disease and poverty.

Google’s DeepMind start-up, which was bought for £255 million ($400 million) earlier this year, is currently attempting to mimic the properties of the human brain’s short-term working memory.

‘Looking further ahead, there are no fundamental limits to what can be achieved. […] There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.’

Eric Schmidt, Google chief executive, argued that there is no need to fear AI, and it could even be the making of humanity.

‘These concerns are normal,’ he said onstage during the Financial Times Innovate America event in New York this week. ‘They’re also to some degree misguided.’

However, Elon Musk, the entrepreneur behind Space-X and Tesla, disagrees, warning of ‘something seriously dangerous happening’ as a result of machines with artificial intelligence. And this “something” might begin to “happen” in as few as five years.

Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Centennial Symposium in October, Musk described artificial intelligence as our ‘biggest existential threat’, and has previously likened the development of thinking machines to ‘summoning the demon’.

As the nuclear, aerospace, manufacturing and agricultural industries forge ahead developing autonomous systems, there is growing unease about the future.

How to prevent robot world domination? How to ensure AI can follow rules and make ethical decisions?

Researchers at the Universities of Sheffield, Liverpool and the West of England, Bristol have set up a new project to address concerns around these new technologies. The £1.4 million project will run until 2018. It aims to ensure robots meet industrial standards and are developed responsibly, allaying fears that humans may not be able to control them.

Meanwhile, in the field of space exploration…

Intended to perform exploratory missions on the Moon — alongside a four-wheeled robotic rover — the new designs were introduced by Toyota in a presentation titled “Realization of Moon Exploration Using Advanced Robots by 2020.”

What about little green men, the extraterrestrials? Will the first aliens we find be ROBOTS? Intelligent life may have turned to AI by the time we make first contact, claims Dr Susan Schneider from The University of Connecticut.

  • Dr Schneider says the first intelligent aliens we find might not be biological. Advanced aliens might be machines.
  • Humanity is already heading in this direction, Dr Schneider claims, and an advanced race would likely have already made this evolutionary leap. ‘The next evolutionary step could be we are post-biological,’ she says.
  • ‘If you look at our own civilisation, people are becoming more immersed in computers, and we can already see signs of it in our own culture. […] if you need space travel, humans aren’t very durable. But with computers, you don’t have the same threat to worry about.’

‘The next evolutionary step could be post-biological,’ said Dr Schneider. Recently experts in Washington DC discussed the chances of finding alien life. Seti astronomer Dr Shostak said we ‘could be the first’ generation to know we are not alone.


Digitize Me


The  year of Our Lord 2052, speaking in pre-Singularity language, is in full swing.

These days our Lords are many. The pantheon is rather overcrowded. All of them are immortal, of course. Their individual embodiment varies. Some are still in possession of their mortal flesh; others roam around holographic islands in holographic form; while several have chosen not to bother and live fully digitized, just like the rest of us simple folks.

There is Vernor Vinge, Ray Kurzweil, John Smart and then, of course, brothers  Zuker-Brin (I think there must be at least three of them), and also this Russian guy, Dmitry Something-or-another, whose foolhardy hologram is easily recognized for the ridiculous sweaters it wears.

In fact, we all are fully digitized. Nothing to it. It would be dumb not to do the ultimate upload into cyber-self, in this age of Technological Singularity.

My personal digital twin is a stunner. I chose the option of a randomized composite of physical beauty and then fine-tuned the coloring, and voilà!

Not that I’m all that unhappy about my own appearance, but you wouldn’t want to be the ugliest digitvidual out there, would you?

Mine is an amazingly capable digitvidual. Learns new stuff every minute. Keeps me totally in the moment. That is, when I bother to check on her. Lately, she has developed some new interests, which would’ve been mine too if I’d kept track of what she is up to. Such as politics, for instance.

Our fully-digitized-Singularly-advanced government seems to be busy debating the constitutionality of our digitized twins forming family units. Cyber-selves want to get married and live happily ever after without regard to us, the flesh-and-blood originals. A mindbogglingly complex issue, to think of it. But thinking is boring. No one is doing it anymore anyway, except for our forever-evolving digital selves. Or, perhaps, they are no longer our selves. Them selves? Ah, well. Never mind.

The digital-union-in-marriage nonsense will definitely end up in the Supreme Court. Which is, naturally, digitized too. The Nine Pixelated Robes might give it a go for that very reason — their opinion is a figment of their combined digitized imagination. I should be checking for an invitation to a virtual wedding soon, I suppose. Weird, no?

Well, I could go on with this story for a while longer, I’m sure. The inspiration is plentiful. Take this one, for instance: Within 5 Years Digital Twins Could Start Making Decisions For Us. The future is right around the corner. A somewhat unsettling future, if you ask me. Then again, I may well be a generation or two removed from the forefront of technological futurism.

On the other hand, who said that the Technological Singularity (the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence exceeds human intellectual capacity and control, thus radically changing or even ending civilization) is anything to be hugely excited about?


However, whether we like it or loathe it, technological and scientific progress, no matter how destructive it may end up being for humankind (think Manhattan Project), is virtually unstoppable. As Vernor Vinge put it, if the technological Singularity can happen, it will. And Vinge, arguably one of the forerunners of the concept, is ever so slightly apprehensive. Next to ‘I argue […] that we are on the edge of change comparable to the rise of human life on Earth,’ he said this:

And for all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from one thousand years remove… instead of twenty.

Ray Kurzweil, director of engineering at Google, believes that in just over 30 years, humans will be able to upload their entire minds to computers and become digitally immortal — an event called the Singularity.

A futurist and founder of the Acceleration Studies Foundation, John Smart uses many names for the technology he predicts — digital twin, cyber-self, personal agent — but the concept stays the same: a computer-based version of you.

‘When you and I die, our kids aren’t going to go to our tombstones, they’re going to fire up our digital twins and talk to them,’ he promises.

The aforementioned Dmitry Something-or-another is Dmitry Itskov, a Russian entrepreneur, billionaire and the founder of New Media Stars, a web-based media company. Itskov is best known for founding the 2045 Initiative, which aims to achieve cybernetic immortality by the year 2045. His hologram-in-development wears colorful sweaters. He wants to live forever: his brain digitized and uploaded, his immortal self — sweater and all — turned into a hologram, the world — into a holodeck, and life — into a never-ending tele-immersion.

See my earlier post VoxPopuliAndDalaiLamasTo: 2045 about Dmitry Itskov and his yearning for immortality.

Read My Brain, You Rat!

A few months back, I came across an article in BBC Radio Science. A team of researchers at Duke University Medical Center in North Carolina have connected the brains of lab rats, allowing one to communicate directly to another via electronic link. The wired brain implants sent sensory and motor signals from one rat to another, thus creating the first ever brain-to-brain interface.

The rat receiving the signal could correctly interpret the information. One replication of the experiment successfully linked a rat at Duke with one at the University of Natal in Brazil.

The information was transmitted in real time, but it took about 45 days of training the participating rats, an hour a day. How well could the decoder animal decipher the brain input from the encoder rat and choose the correct lever? It succeeded about 70% of the time.

The researchers first trained pairs of rats to solve a simple problem: for the reward of a sip of water, a rat had to press the correct lever when an indicator light above the lever switched on. Then the rodents who successfully completed the training were placed in separate chambers, their brains connected by arrays of microelectrodes, each roughly one hundredth the diameter of a human hair. One rat was designated as the “encoder.” Once this rat pressed the correct lever, its brain activity was delivered as electrical stimulation into the brain of the second rat, designated the “decoder.”

Both rats had the same types of levers in their chambers. The encoder rat sees the light and presses a lever to receive a reward. As it does so, the brain signal is sent to the decoder rat’s brain, which receives no other cues indicating which lever it should press to obtain a reward and has to rely on the cue transmitted from the encoder via the brain-to-brain interface. (Details of the work are outlined in the journal Scientific Reports.)
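The reported numbers are easy to sanity-check with a quick simulation. The sketch below is my own Monte Carlo toy (the 70% figure comes from the article; everything else, including the two-lever setup, is a simplification):

```python
import random

def run_trials(n_trials, decode_fidelity=0.70, seed=0):
    """Simulate the encoder/decoder protocol: the trained encoder always
    presses the lit lever; the decoder follows the transmitted cue with
    the given fidelity and otherwise presses the wrong lever."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        lit = rng.choice(["left", "right"])   # cue shown to the encoder only
        if rng.random() < decode_fidelity:
            press = lit                        # cue decoded correctly
        else:
            press = "right" if lit == "left" else "left"
        correct += press == lit
    return correct / n_trials
```

Over ten thousand simulated trials the decoder’s success rate lands near the published 70%, well above the 50% a guessing rat would manage.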

The idea could be extended to humans, researchers say. Once perfected, the concept might serve to develop a technique of  exchanging information across millions of people without using keyboards or voice recognition devices or the type of interfaces that are routinely used as I write and you read.

Great story, I thought then. I’ll make it into a sequel to Rats! — the story of Sam and Gladys, the two lab rats trying to mess up a scientific experiment — which I posted last June.

But as Russians say, don’t put off until tomorrow what can be put off until after the morrow. While I kept postponing writing about Sam and Gladys talking brain-to-brain, researchers achieved  yet another breakthrough.

An international team of scientists demonstrated what they call the first direct brain-to-brain communication, sending the words “hola” and “ciao” between two people thousands of miles apart.

“We were able to directly and non-invasively transmit a thought from one person to another, without them having to speak or write,” study co-author Dr. Alvaro Pascual-Leone, a neurologist at Beth Israel Deaconess Medical Center in Boston and a Harvard Medical School professor, said in a written statement.

The study was published online Aug. 19 in the journal PLOS ONE. It isn’t immediately known whether or not the human participants of the experiment were rewarded with a sip of water for good performance. People Talk ‘Brain-To-Brain’ For First Time Ever has a video of the experiment’s setup.

“We hope that in the longer term this could radically change the way we communicate with each other.” (Dr. Giulio Ruffini, a theoretical physicist at Starlab in Barcelona and co-author on the study, told AFP.)

Years ago, I worked with a man from India, a computer programmer. Once, matter-of-factly, he mentioned that he hadn’t exchanged a word with his wife in over two years, although the two of them were happy together, lived in the same house and communicated constantly, although not in a “normal” fashion but… telepathically. Both of them also almost daily sought advice from their guru, a saintly man who never left Tamil Nadu, likewise telepathically. Just like Sam and Gladys… Go figure.

Number 1 Risk For This Century

Is it global warming? World War III, perhaps?

Nope. It’s superintelligence.

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this.” Thus spoke Shane Legg, one of the founders of DeepMind, in his belief that artificial intelligence could play a part in humans’ demise. Neuroscientist Demis Hassabis founded DeepMind two years ago and recently sold it to Google. The aim of the company is AI development that will allow computers to think like humans.

Another AI group, San Francisco-based Vicarious, is attempting to build a program that mimics the brain’s neocortex, simulating its multiple levels of functionality: sensory perception, spatial reasoning, conscious thought, and language in humans. “Vicarious is developing machine learning software based on the computational principles of the human brain.”

What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?

Superintelligence: Paths, Dangers, Strategies, a book by Nick Bostrom, asks these same questions from the other side of the equation: how will humanity cope with super-intelligent computers? The book lays the foundation for understanding the future of humanity and intelligent life. Mr Bostrom has also argued that the world we live in is fake, and humans are nothing but a computer simulation. But never mind that.

Amsterdam-based engineer, futurist and CEO of Poikos, Nell Watson said computer chips could soon have the same level of brain power as a bumblebee – allowing them to analyse social situations.

“I am deeply saddened by the inability of robots to do something as simple as telling apart an apple and a nectarine. […] Machines are going to be aware of the environments around them and, to a small extent, they’re going to be aware of themselves.”


At a conference just days ago, she said that robots could decide that the greatest compassion to humans as a race is to get rid of everyone. Watson makes the case that as robots get smarter and more capable, “the most important work of our lifetime is to ensure that machines are capable of understanding human value. It is those values that will ensure machines don’t end up killing us out of kindness.”

Nell Watson’s comments follow tweets by Tesla founder Elon Musk earlier this month. He said AI could be more dangerous than nuclear weapons. Musk made an investment in Vicarious, along with Mark Zuckerberg and actor Ashton Kutcher. Musk is so concerned that he is investing in several AI companies. Not to make money, he says, but to keep an eye on the technology in case it gets out of hand.

Stephen Hawking, too, has warned that artificial intelligence has the potential to be the downfall of mankind. Writing in the Independent he said, “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Wanna Babe With Legs?

Atta girl! Her name is Unity. You either know her or you don’t. She is the Unity-chan cartoon character, the mascot for a cross-platform game engine called — what else? — Unity. And this is a sofa with an odd-looking object on it. Notice the perfect color match between Unity’s attire and the object’s adornment. It’ll come into play eventually. On the table there is a laptop, some computer paraphernalia and a headset.

It’s not your regular headset. It’s an Oculus Rift headset. When you play a video game, its software provides a 360-degree immersive virtual reality experience.

Besides conventional games, the technology is used for other recreational activities. Put the device on, and step right into a virtual reality where you can interact with a girl resembling Unity from every which angle. Move around the in-game environment using a video-game controller. In-game Unity will “follow” your movements and know if you lie down or stand up. She is the jealous type and becomes agitated if you walk away from her, punishing you with a virtual roundhouse kick. Yes, mawashi geri, a karate kick. The girl can be a virtual sensei when you want to get away. She is no dummy and she’ll talk to you, too, albeit in Japanese.

The idea of a virtual girlfriend was developed by the Japanese firm Up Frontier, for technologically savvy, presumably and ideally relationship-less, single male users of all ages.

So far so good. What of the odd object on the sofa, then? Now you can see it from another angle. It does look like a part of human anatomy — from the waist down to the knees — dressed in Unity’s sky-blue skirt.

Yep. You can touch her too, albeit only her upper legs. And not virtually either.

A bizarre pillow shaped like a pair of girl’s legs, coupled with a virtual reality headset, is offering hope to men who do not want to spend their evenings alone on the sofa. Pictured is a man testing the virtual girlfriend developed by a Japanese firm.

The pillow is known as the Hizamakura and is designed to look and feel like a kneeling pair of female legs.

The wonder of Oculus Rift technology behaves like a consenting adult: it lets the user knead “her” stand-alone thighs and rest his head on the non-virtual, but not quite real either, lap.

In the pilot version the user (or should we say client?) finds himself by the sea — seagulls and all — sitting on a bench with Unity.

Designer Nico Douga has road-tested the entire demo package and declared that it “has potential,” although he found the girl’s voice annoying and the whole experience — since it was recorded for a demo video — a bit uncomfortable.

Other scenarios are being developed as we speak, as well as other “types” of virtual girlfriends. However impossibly cute, Unity might not be everyone’s favorite “type” of girlfriend. Is the “lonesome lap” going to be the only part of “her” available in the future? Are any other body parts and, well, supporting assets in the works?

While I pause here, full of opinion but short of words, let me tell you a true story of Sam and Rebecca.

Sam was a genius IT guy. Sorta weird, unkempt and totally lacking social skills. But one day his demeanor changed. Sam brightened up, started shaving and changed his shirt every day. He became a 9-to-5 guy, no longer spending his nights in the office. “Must run. Rebecca is waiting,” he’d say.

Had Sam fallen in love? He must have! Good for him. Genius or not, the guy needed a life, and now it seemed he’d got it.

“Rebecca was waiting” every day for a month or so. Then, just as suddenly, Sam fell back into his “before Rebecca” routine.

Using my charming Russian accent (both as a shield and an endearment currency), I asked Sam what happened to Rebecca. And he told me a sad tale…

Have I mentioned that all of the above was taking place in the mid-Stone Age, one dark hour past meridian, before the internet, online dating, chat rooms and Wikipedia? Before 3D animation? CD-ROMs were the IT thing.

Sam had HER on CD. Rebecca was INTERACTIVE. Sam had three choices: 1. Rebecca: a sexy blonde with big hair in a frilly negligee; 2. Mary Jane: a brunette dominatrix, attired entirely in leather, whips in hand; and 3. Tabitha: a redhead with severe green eyes behind cute specs (for those few who find the serious “librarian type” sexy).

2D Rebecca sported a 34DD bust and zoomed across the screen as the program shuffled a series of her images, in different poses and facial expressions, more or less in accord with the flow of her “conversation” with Sam.

— Hi, Sam! So happy you are back! Do you miss me?

And so it went, until Rebecca became so annoyingly predictable that, in disgust, Sam wrote his own, far superior program. He called her Heather… Sam was no Pygmalion, though, and didn’t fall in love with his own creation. Moreover, he immediately abandoned both Rebecca and his Galatea-Heather.

Soon afterward I lost track of Sam. Rumor has it, a few years later he purchased yet another product, and a much more expensive one at that.

It, too, included a CD-ROM: portraits and bios of a hundred or so Russian women. After Sam made his selection (needless to say, his choice was a sexy, dimpled blonde with big hair), the agency arranged a trip to Russia for him, to meet the girl. They married, and their success story was used by the agency as an advert. What can I say, Sam must’ve fallen in love with my Russian accent.

Who knows, his life could’ve turned out very differently if he were a lonely, single Japanese male now…

Sam would be sitting on his sofa, Oculus Rift headset on, interacting with his virtual girlfriend, experiencing 360-degree immersive virtual reality while kneading a latex Hizamakura pillow in the form of a pair of thighs…