
You can’t unsee Tedlexa, the IoT/AI bear of your nightmares



Tedlexa, an IoT-stuffed bear. Alexa, how do I create something that combines AI with a scary 1980s toy?

Update, 1/2/21: It’s New Year’s weekend, and Ars staffers are still enjoying some downtime to prepare for a new year (and, we’re sure, a flood of CES emails). As it happens, we’ve resurfaced some classic Ars stories, like this 2017 project from Ars Editor Emeritus Sean Gallagher, who created generations of nightmare fuel with just a nostalgic toy and some IoT equipment. Tedlexa was first born (er, documented in writing) on January 4, 2017, and the story appears unchanged below.

It’s been 50 years since Captain Kirk first spoke commands to an unseen, omniscient computer on Star Trek, and not quite as long since David Bowman was serenaded by HAL 9000’s rendition of “A Bicycle Built for Two” in 2001: A Space Odyssey. While we’ve been talking to computers and other devices for years (often in the form of exasperated interjections), we’re only now beginning to scratch the surface of what’s possible when voice commands are connected to artificial intelligence software.

In the meantime, we’ve long fantasized about talking toys, from Woody and Buzz in Toy Story to the creepy AI teddy bear that tagged along with Haley Joel Osment in Steven Spielberg’s A.I. (Well, maybe nobody dreams of that teddy bear.) And ever since Furby mania, toy makers have been trying to make their toys smarter. They’ve even connected them to the cloud – with predictably mixed results.

Naturally, I decided it was time to push things forward. I had an idea to connect a speech-driven AI and the Internet of Things to an animatronic bear – all the better to stare into the lifeless, occasionally blinking eyes of the singularity itself. Ladies and gentlemen, I give you Tedlexa: a gutted 1998 model of the Teddy Ruxpin animatronic bear tied to Amazon’s Alexa Voice Service.

Meet Tedlexa, the personal assistant of your nightmares.

I was by no means the first to bridge the gap between animatronic toys and voice interfaces. Brian Kane, an instructor at the Rhode Island School of Design, threw down the gauntlet with a video of Alexa wired into that other servo-animated icon, Big Mouth Billy Bass. The Frankenfish was powered by an Arduino.

I could not leave Kane’s hack unanswered, having previously explored the uncanny valley of the Bearduino – a hardware-hacking project by Portland-based developer/artist Sean Hathaway. With a hardware-hacked bear and an Arduino already in hand (plus a Raspberry Pi 2 and various other toys at my disposal), I set out to make the ultimate talking teddy bear.

To our future robo overlords: please forgive me.

His master’s voice

Amazon is one of a pack of companies racing to connect voice commands to the enormous computing power of the “cloud” and the ever-growing Internet of (Consumer) Things. Microsoft, Apple, Google, and many other challengers have tried to connect the voice interfaces in their devices to an exponentially growing number of cloud services, which in turn can be linked to home automation systems and other “cyber-physical” systems.

While Microsoft’s Project Oxford services have remained largely experimental and Apple’s Siri remains tied to Apple hardware, Amazon and Google have raced headlong into battle to become the established voice service. As ads for Amazon’s Echo and Google Home have saturated broadcast and cable television, the two companies have simultaneously begun to open their affiliated software services to others.

I chose Alexa as the starting point for our descent into IoT hell for several reasons. One is that Amazon lets other developers build “skills” for Alexa that users can choose from a marketplace, much like mobile apps. These skills determine how Alexa interprets certain voice commands, and they can be built on Amazon’s Lambda application platform or hosted by the developers themselves on their own servers. (Rest assured, I’ll be doing some skills work in the future.) Another point of interest: Amazon has been fairly aggressive about getting developers to build Alexa into their own gadgets – including hardware hackers. Amazon has also released demo versions of its own Alexa client for a variety of platforms, including the Raspberry Pi.

AVS, or Alexa Voice Services, requires relatively little computing power on the user’s end. All of the voice recognition and speech synthesis happens in the Amazon cloud; the client merely listens for commands, records them, and forwards them as an HTTP POST request carrying a JavaScript Object Notation (JSON) object to the AVS interface. The voice responses come back as audio files to be played by the client, wrapped in a returned JSON object. Sometimes they include a handoff of streamed audio to a local audio player, as with AVS’s “Flash Briefing” feature (and music streaming – but that’s only available on commercial AVS products right now).
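To make that request/response cycle concrete, here is a minimal sketch in Python of the exchange described above, modeled on the retired AVS v1 REST interface that early Raspberry Pi clients used. The endpoint, metadata fields, and single-shot (non-streaming) flow should be read as illustrative rather than as a drop-in client:

```python
# A minimal sketch of one AVS round trip: POST recorded audio plus JSON
# metadata, get back a multipart response whose audio part is the voice reply.
# Endpoint and field names follow the retired AVS v1 REST API (illustrative).
import json
import requests

AVS_URL = "https://access-alexa-na.amazon.com/v1/avs/speechrecognizer/recognize"

def ask_alexa(access_token: str, pcm_audio: bytes) -> bytes:
    metadata = {
        "messageHeader": {},
        "messageBody": {
            "profile": "alexa-close-talk",
            "locale": "en-us",
            "format": "audio/L16; rate=16000; channels=1",
        },
    }
    resp = requests.post(
        AVS_URL,
        headers={"Authorization": "Bearer " + access_token},
        files=[
            ("request", ("request", json.dumps(metadata), "application/json")),
            ("audio", ("audio", pcm_audio, "audio/L16; rate=16000; channels=1")),
        ],
        timeout=30,
    )
    resp.raise_for_status()
    # The body is multipart as well; a real client would parse out the audio
    # part and hand it to a local player.
    return resp.content
```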

Before I could build anything with Alexa on a Raspberry Pi, I needed to create a project profile on Amazon’s developer site. When you create an AVS project there, the site generates a set of credentials and shared encryption keys used to configure whatever software you use to access the service.
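In practice, those credentials boil down to a few strings pasted into the client’s configuration. A hypothetical example (the field names mirror those used by Amazon’s AVS sample apps; the values are placeholders):

```python
# Placeholder AVS credentials from the Amazon developer console.
AVS_CREDENTIALS = {
    "ProductID": "Tedlexa",  # the device type registered on the developer site
    "ClientID": "amzn1.application-oa2-client.XXXXXXXX",
    "ClientSecret": "XXXXXXXX",
}
```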

Once you get an AVS client up and running, it needs to be configured with a Login With Amazon (LWA) token through its own setup webpage – one that grants access to Amazon’s services (and, potentially, to Amazon payment processing). So, yes, I was about to create a Teddy Ruxpin with access to my credit card. This will be a topic for some future IoT security research on my part.
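The LWA piece is standard OAuth2: a one-time browser login yields a long-lived refresh token, which the client then trades for short-lived access tokens. A sketch of that exchange using LWA’s public token endpoint, with error handling omitted:

```python
# Exchange a stored LWA refresh token for a short-lived access token.
import requests

def get_access_token(client_id: str, client_secret: str, refresh_token: str) -> str:
    resp = requests.post(
        "https://api.amazon.com/auth/o2/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
            "client_secret": client_secret,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```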

Amazon offers developers sample Alexa clients to get started, including an implementation that runs on Raspbian, the Raspberry Pi build of Debian Linux. The official demo client, however, is largely written in Java. Despite, or perhaps because of, my previous Java experience, I was leery of attempting any connection between the sample code and the Arduino-driven bear. As far as I could determine, I had two possible courses of action:

  • A hardware-focused approach that used the audio stream from Alexa to drive the bear’s animation.
  • Finding a more accessible client, or writing my own, preferably in an approachable language like Python that could drive the Arduino with serial commands (see the sketch after this list).
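For the record, option two is only a few lines of Python. A hypothetical sketch with pyserial, assuming an Arduino on the other end of the USB cable that maps a single received byte to a servo angle (that one-byte protocol is my invention, not part of any sample code):

```python
# Drive the bear's jaw servo over USB serial; one byte = angle in degrees.
import time

import serial  # pyserial

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    arduino.write(bytes([90]))  # open the mouth halfway
    time.sleep(0.5)
    arduino.write(bytes([0]))   # close it again
```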

Of course, being a software-focused guy who had already done some significant software work with Arduinos, I chose… the hardware route. Hoping to overcome my lack of electronics experience with a combination of internet searches and raw enthusiasm, I grabbed my soldering iron.

Plan A: Sound in, servo out

My plan was to use a splitter cable on the Raspberry Pi’s audio output, driving the sound both to a speaker and to the Arduino. The audio signal would be read as analog input by the Arduino, and I would somehow convert the changes in volume into values that would, in turn, be converted to digital output to the servo in the bear’s head. The elegance of this solution was that I would be able to use the animated robo-bear with any audio source – leading to hours of entertainment value.
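The signal processing at the heart of Plan A is just an envelope follower: track the loudness of the audio and scale it to a jaw angle. Here is that mapping sketched in Python for clarity; on the real rig this math would run on the Arduino reading the analog pin, and the gain factor and file name are placeholders:

```python
# Envelope follower: RMS loudness of each audio chunk -> 0-180 degree jaw angle.
import audioop  # stdlib (deprecated in newer Pythons; fine for a 2017 project)
import wave

CHUNK = 1024  # frames per envelope sample

with wave.open("alexa_reply.wav", "rb") as wav:
    width = wav.getsampwidth()
    while True:
        frames = wav.readframes(CHUNK)
        if not frames:
            break
        rms = audioop.rms(frames, width)              # 0..32767 for 16-bit audio
        angle = min(180, int(rms / 32767 * 180 * 4))  # crude gain of 4
        print(angle)  # on the real rig, this value would go out over serial
```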

It turns out that this is the approach Kane took with his Bass-lexa. In a telephone conversation, he revealed for the first time how he pulled off his talking fish as an example of rapid prototyping for his students at RISD. “It’s about doing it as quickly as possible so that people can experience it,” he explained. “Otherwise, you end up with a big project that does not get into people’s hands until it’s almost done.”

So, Kane’s quick prototyping solution: connecting a sound sensor, physically taped to an Amazon Echo, to an Arduino that controls the motors driving the fish.

Kane sent me this picture of his prototype – sound sensor and breadboard taped to an Amazon Echo.

Brian Kane

Of course, I knew nothing of this when I started my project. I also did not have an Echo or a $4 sound sensor. Instead, I stumbled around the Internet looking for ways to connect the Raspberry Pi’s audio jack to the Arduino.

I knew that audio signals are alternating current, forming a waveform that drives headphones and speakers. The analog pins on the Arduino can only read positive DC voltages, however, so in theory the negative-value peaks in the waves would be read as zero.
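A quick simulation makes the problem obvious: feed an ideal 1-volt-peak sine wave to a pin that clamps everything below zero, and half the waveform simply vanishes (the voltage figures here are illustrative):

```python
# What a 0-5V analog pin "sees" of a +/-1V AC audio signal: the negative
# half-cycles clamp to zero.
import math

for i in range(8):
    v = math.sin(2 * math.pi * i / 8)  # ideal signal, -1V .. +1V
    adc = max(0.0, v)                  # the pin can't read below ground
    print(f"signal {v:+.2f} V -> pin reads {adc:.2f} V")
```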

I got false hope from an Instructable I found that moved a servo arm in time to music – simply by soldering a 1,000-ohm resistor to the ground of the audio cable. After seeing that Instructable, I began to doubt my sanity a little, even as I boldly pressed on.

As I watched data from the audio cable stream in via test code running on the Arduino, it was mostly zeros. After taking the time to review some other projects, I realized the resistor attenuated the signal so much that it barely registered at all. This turned out to be a good thing – wiring a direct feed based on the Instructable’s approach would have pumped 5 volts or more into the Arduino’s analog input (more than double the maximum).

Getting the Arduino-only approach to work would mean another run to an electronics supply store. Unfortunately, I discovered that my usual haunt, Baynesville Electronics, was in the final stages of its going-out-of-business sale and was running out of stock. But I pressed forward, needing to procure the components to build an amplifier with a DC offset – something to convert the audio signal into a form I could work with.
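The idea behind a DC-offset (bias) stage is simple arithmetic: shift the AC signal up to the middle of the Arduino’s 0–5V window so both halves of the waveform survive, clamping at the rails. A sketch of that mapping, with illustrative gain and rail values:

```python
# Bias a small AC signal into an ADC's 0..v_rail range: out = rail/2 + gain*in.
def bias(v_in: float, gain: float = 2.0, v_rail: float = 5.0) -> float:
    return min(v_rail, max(0.0, v_rail / 2 + gain * v_in))

# A +/-0.5V input now swings 1.5V..3.5V instead of clipping at ground.
print(bias(-0.5), bias(0.0), bias(0.5))  # -> 1.5 2.5 3.5
```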

It was when I started pricing oscilloscopes that I realized I had ventured up the wrong branch of the bear-hacking tree. Fortunately, a software answer was waiting for me – a GitHub project called AlexaPi.



