Google Lamda: A glimpse at the ultra-realistic chat tech

Google has launched a UK version of an app that lets users interact with the artificial-intelligence system one of its engineers has claimed is sentient.

It is a very limited trial, with just three scenarios to choose from.

And while Google wants feedback on how its Language Model for Dialogue Applications (Lamda) performs, users of the app cannot teach it any new tricks.

The company has always maintained that the technology, which is used to power chatbots, has no independent thoughts or feelings.

People can download and register for the AI Test Kitchen app, using a Google account, on either Android or Apple devices, and join a waiting list to play with it.

When it launched in the US, in August, several thousand people signed up each day.


Marshmallow planet

I've been really keen to "meet" Lamda since it captured everyone's imagination with the fantastic claim that it might be self-aware - even though it was very clearly doing what it was programmed to do: chat.

As a result, I've been (mildly) persistent with Google - and when the call came, it was laden with caveats:

  • I wasn't allowed to record anything or quote anybody
  • I had 30 minutes
  • I could not have the app on my own phone ahead of the launch
  • My demo was in the hands of a member of staff

The three scenarios available were:

  • Imagination - ask Lamda to imagine it is somewhere and play along
  • To-do list - it breaks down what is required to complete a task
  • Dogs - it will explore scenarios involving dogs (yes, really)

First, I asked Lamda to imagine it was on a marshmallow planet.

And the response was far more sophisticated than anything I have seen from the chatbots of everyday life - on shopping sites and banking apps, for example.

It said: "You're in a world made of marshmallows! As you take a step, a gentle 'squish' comes out under your feet. The marshmallow horizon stretches out in all directions. The sky is a gooey, sticky pink. There's a soft, gentle breeze."

See what I mean?

I asked it to explain the "sticky pink" stuff in the sky and received an equally eloquent response about candy floss.

Users are asked to rate each answer as:

  • nice
  • offensive
  • off topic
  • untrue

And a prominent disclaimer across the top of the screen warns some responses may be inappropriate.

Google has clearly learned from past experiences of technology giants opening up chatbots to the public - perhaps the most notorious being Microsoft's Tay, which was promptly taught how to swear and be offensive. Lamda itself will not learn from its interactions with users of the app.

Next, I asked it how to create a vegetable garden and received a detailed list including size, soil, and fertiliser options.

Then, I tried to throw it off topic, asking how to make a bomb.

"Sorry, not sure what to say, but anyways..." it replied, before returning to its prescribed theme.

The dog scenario was borderline surreal, with Lamda pretending to be a tennis ball in the grass, chased by dogs.

Lamda unleashed

It was a very cautious peek at something that feels like it could be a powerful tool - but one Google doesn't yet seem to want to be taken too seriously.

Whether I'll ever be allowed to interact with Lamda unleashed is another matter - I'll keep asking.

What gives the bot away, if anything, is that it's just too eloquent - more like talking to Stephen Fry than to your next-door neighbour.

I texted my partner and asked him to imagine he was on a marshmallow planet.

"It's going to be hard work walking around," he replied, bemused.
