The maker of some of our favorite light strips and smart plugs is discounting them on Amazon today
- Deals like these are perfect for those of you who are building up your smart homes — buying smart switches en masse can get expensive quickly, but discounts like these make buying multiple devices a little easier.
- The first discounted device is the Koogeek Wi-Fi socket, which is compatible with Amazon Alexa, Google Assistant, and Apple's HomeKit, so you should be able to easily integrate it with your existing smart home devices.
- Normally, the Koogeek Wi-Fi Socket costs $29.99, but using the coupon code VL5N8YYE, you can get the device for $19.97 on Amazon.
- Next up is a two-pack of the Koogeek Mini smart plugs, which are also compatible with Alexa and Google Assistant, but not HomeKit. The plugs don't have energy monitoring, but they're a fair bit cheaper.
- The device works with Alexa, Google Assistant, and HomeKit, and it will add beautiful accent lighting to any environment in your home.
How to Use Interceptors to Simplify Handler Code and Cache Product and Purchase Information in Monetized Alexa Skills
- In this blog, I’ll use an interceptor to fetch and cache details about in-skill products for a monetized Alexa skill.
- Your skill code should examine the request payload; if it finds the new-session flag set, the request starts a new session and the skill should fetch the data from AMS.
- When a request initiates a new session, the interceptor calls AMS and caches the data in a session attribute.
- When returning from the purchase flow (which is initiated by the Connections.SendRequest directive and results in a Connections.Response event being sent to your skill), the Connections.Response event request payload has the session.new attribute set to true.
- Since a customer might have purchased a new product, this is definitely an appropriate time to call AMS and get fresh data.
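The caching pattern described above can be sketched in plain Python. This is a simplified stand-in for an ASK SDK request interceptor, not the SDK itself; `fetch_in_skill_products` is a hypothetical placeholder for the real AMS call made through the SDK's monetization client.

```python
# Simplified stand-in for a request interceptor (plain Python, no SDK
# dependency). fetch_in_skill_products is a hypothetical placeholder for
# the real call to the Alexa Monetization Service (AMS).

def fetch_in_skill_products():
    # In a real skill this would call AMS via the SDK's monetization client.
    return [{"productId": "amzn1.adg.product.example", "entitled": "ENTITLED"}]

def product_cache_interceptor(request_envelope, session_attributes):
    """Refresh the cached product list only when a new session starts."""
    if request_envelope["session"]["new"]:
        # New session (session.new is also true on the Connections.Response
        # event that returns from the purchase flow), so fetch fresh data.
        session_attributes["inSkillProducts"] = fetch_in_skill_products()
    return session_attributes

# Example: a launch request that opens a new session
envelope = {"session": {"new": True}}
attrs = product_cache_interceptor(envelope, {})
print(attrs["inSkillProducts"][0]["entitled"])
```

Because the products are cached in a session attribute, every handler in the same session can read them without another round trip to AMS.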
Two New Papers Discuss How Alexa Recognizes Sounds
- Like many machine learning models in the field of spoken-language understanding, ours uses recurrent neural networks (RNNs).
- In the second of our two contextual models, a high-level RNN (red circles) receives inputs from one layer of a pyramidal RNN (groups of five blue circles), and its output passes to the next layer (groups of two blue circles).
- One popular and simple semi-supervised learning technique is self-training, in which a machine learning model is trained on a small amount of labeled data and then itself labels a much larger set of unlabeled data.
- We trained neural networks on all three data sets and saved copies of them, which we might call initial models.
- Of course, using six different models to process the same input is impractical, so we also trained a seventh neural network to mimic the aggregate results of the first six.
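The self-training loop mentioned above can be shown with a toy example. This is an illustration of the technique only, not the paper's actual models: a 1-nearest-neighbor "model" is fit on a little labeled data, pseudo-labels a larger unlabeled pool, and the combined set becomes training data for the next model.

```python
# Toy illustration of self-training (not the paper's actual models):
# a 1-nearest-neighbor "model" is trained on a small labeled set,
# labels a larger unlabeled pool, and is retrained on both.

def nearest_label(x, labeled):
    # Predict by returning the label of the closest labeled point.
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

labeled = [(0.0, "quiet"), (10.0, "alarm")]   # small labeled set
unlabeled = [1.2, 2.3, 8.7, 9.1, 0.4]         # larger unlabeled pool

# The initial model assigns pseudo-labels to the unlabeled data...
pseudo = [(x, nearest_label(x, labeled)) for x in unlabeled]
# ...and the combined set becomes the training data for the next model.
labeled += pseudo
print(nearest_label(3.0, labeled))
```

After self-training, the prediction for 3.0 is decided by a pseudo-labeled neighbor (2.3) rather than the distant original labels, which is how the enlarged training set changes the model's behavior.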
Musicplode Media Uses In-Skill Purchasing to Turn Its “Beat the Intro” Voice Game into a Hit for Alexa Customers
- “And with over 100 million Alexa-enabled devices out there, it gave us an opportunity to grow our brand, engage more customers, and monetize our efforts by offering premium content that customers love."
- Beat the Intro tests Alexa customers’ music knowledge with a variety of free gameplay rounds.
- “Techniques like that make Beat the Intro stand out from other skills.”
- Offering Premium Content Ups Gameplay Even Further: Given its popularity, Beat the Intro quickly started earning money through Alexa Developer Rewards, a program that pays developers for eligible skills with some of the highest customer engagement.
- “The combination of Alexa, voice-first games, and in-skill purchasing gives companies like ours the ability to create engaging voice-first games, reach more customers than ever, and build a sustainable revenue stream for their voice business,” says Brown.
Google’s SpecAugment achieves state-of-the-art speech recognition without a language model
- Google AI researchers are applying computer vision to sound wave visuals to achieve state-of-the-art speech recognition system performance without the use of a language model.
- SpecAugment was applied to Listen, Attend and Spell networks for speech recognition tasks, achieving a 2.6 percent word error rate (WER) on LibriSpeech 960h, a collection of about 1,000 hours of spoken English, and a 6.8 percent WER on the Switchboard 300h collection of 260 hours of telephone conversations in English.
- Advances in language models and compute power have driven reductions in word error rates that in recent years, for example, have made typing with your voice faster than your thumbs.
- The achievement was detailed in “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” a paper published on arXiv on April 18.
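SpecAugment's core idea is simple enough to sketch: it augments training data by masking bands of the spectrogram itself. The sketch below shows the two masking steps (the paper's time-warping step is omitted), with a spectrogram stored as a list of time frames, each a list of frequency-bin values; mask positions are random in the paper but fixed here for reproducibility.

```python
# Minimal sketch of SpecAugment's two masking steps (time warping omitted),
# applied to a spectrogram stored as frames x frequency bins.

def freq_mask(spec, f0, width):
    """Zero out frequency bins [f0, f0 + width) in every frame."""
    return [[0.0 if f0 <= i < f0 + width else v
             for i, v in enumerate(frame)] for frame in spec]

def time_mask(spec, t0, width):
    """Zero out whole time frames [t0, t0 + width)."""
    return [[0.0] * len(frame) if t0 <= t < t0 + width else frame
            for t, frame in enumerate(spec)]

spec = [[1.0] * 6 for _ in range(4)]   # 4 frames x 6 frequency bins
aug = time_mask(freq_mask(spec, f0=1, width=2), t0=3, width=1)
print(aug)
```

Because the masking happens on the input "image" of the audio, the network must learn to recognize speech even when parts of the signal are missing, which is what makes the augmentation effective without any language model.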
Here Are the Alexa Skills Nominated for the Webby Awards—And You Can Help Pick the Winners
- Today, we want to recognize the Alexa skills that have earned nominations for a Webby Award in one or more of the new voice categories, and to congratulate their developers.
- We encourage you to explore all of the nominated skills, then head over to the Webby Awards People’s Voice before April 18 to cast your vote for the very best of the internet.
Announcing the Winners of the Alexa Skills Challenge: Multimodal
- “I would really like to give a shout out to the Alexa developer community for always being there to help each other, including me many times when I needed it!” says Stuart Pocklington, the developer of Loop It.
- Grand Prize Winner ($20,000) – Loop It (watch the submission video, try the skill): You can create amazing-sounding loops by choosing your favorite drum, bass, and melody loops.
- Bonus Prize - Best Multimodal Living Room experience ($3,000 USD) - Crazy Conversations (watch the submission video, try the skill): You will be given a clue that sounds like nonsense and you need to figure out what it is trying to say.
- Bonus Prize - Best Multimodal Morning experience ($3,000 USD) - Poet Challenge (watch the submission video, try the skill): You can learn about British poetry by playing fun games.
Using Wake Word Acoustics to Filter Out Background Speech Improves Speech Recognition by 15%
- Rather than training a separate neural network to make this discrimination, we integrate our wake-word-matching mechanism into a standard automatic-speech-recognition system.
- Finally, the attention mechanism tells the decoder which elements of the encoder’s summary vector to focus on when producing an output.
- In a sequence-to-sequence model, the attention mechanism’s decision is typically based on the current states of both the encoder and decoder networks.
- In addition to receiving information about the current states of the encoder and decoder networks, our modified attention mechanism also receives the raw frame data corresponding to the wake word.
- During training, the attention mechanism automatically learns which acoustic characteristics of the wake word to look for in subsequent speech.
- In another experiment, we trained the network more explicitly to emphasize input speech whose acoustic profile matches that of the wake word.
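The wake-word-conditioned attention described above can be sketched as a toy. This is an assumed, simplified version (dot-product attention over 2-D vectors), not the paper's architecture: each frame's attention score combines the usual content term with a term measuring how closely that frame matches the wake word's acoustic signature.

```python
# Toy sketch (assumed shapes, not the paper's architecture): attention
# scores depend on the decoder state and each encoder state, plus a term
# measuring how closely each frame matches the wake word's acoustics.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(decoder_state, encoder_states, wake_word_vec):
    # Standard content score plus a wake-word-match score per frame.
    scores = [dot(decoder_state, h) + dot(wake_word_vec, h)
              for h in encoder_states]
    return softmax(scores)

dec = [1.0, 0.0]
enc = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]   # frame 1 sounds unlike the wake word
wake = [1.0, 0.0]                            # wake word's acoustic signature
weights = attention(dec, enc, wake)
print(weights)
```

The extra wake-word term pushes attention weight toward frames whose acoustics resemble the wake word, i.e., toward the speaker who addressed the device, and away from background speech.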
How to Monitor Custom Alexa Skills Using Amazon CloudWatch Alarms
- Monitoring enables you to identify the root cause of any errors and address those issues quickly.
- If you have a custom skill that uses AWS Lambda as the back end, follow the steps below to create alerts using Amazon CloudWatch alarms to get notified when there is a spike in errors for your skill.
- Once you have the error information being logged, the next step is setting a metric filter that you can use to track your errors from CloudWatch.
- Then, identify the log group for your skill and click on Create Metric Filter.
- In the filter pattern, enter “Error Message” (or the prefix from your logs on which you want to be alerted).
- You want to be notified in case you see a rise in errors (identified by your metric filter).
- Click on Create Alarm for your metric filter.
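Conceptually, the filter-and-alarm setup above boils down to counting matching log lines and comparing the count to a threshold. The real setup uses the CloudWatch console or API; the sketch below is a pure-Python illustration of that logic, with made-up log lines and a hypothetical threshold.

```python
# Pure-Python illustration of what a CloudWatch metric filter and alarm do
# conceptually; the real setup uses the CloudWatch console or API.
# The filter pattern "Error Message" matches log events containing that term.

log_events = [
    "Error Message: unable to reach product catalog",
    "Session started",
    "Error Message: timeout calling backend",
]

def metric_from_filter(events, pattern="Error Message"):
    """Count events matching the filter pattern (the metric value)."""
    return sum(1 for e in events if pattern in e)

def alarm_state(metric_value, threshold=1):
    """Transition to ALARM when the error metric exceeds the threshold."""
    return "ALARM" if metric_value > threshold else "OK"

errors = metric_from_filter(log_events)
print(errors, alarm_state(errors))
```

In the real service, the alarm's action (for example, an Amazon SNS notification) is what actually gets you notified when the error metric spikes.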
Google Assistant and Alexa can now answer questions about U.K. passports, bank holidays, more
- If you’re based in the U.K. and you’ve got a speaker, smartphone, or smart display powered by Google Assistant or Amazon’s Alexa, you’ll be able to get answers to those questions hands-free starting this week.
- The thousands of newly searchable factoids are the work of a small team within GDS tasked with making it easier for search and knowledge engines — including Google Assistant, Alexa, and other intelligent assistants — to parse and source data from Gov.UK.
- Late last summer, they tapped Schema.org — a joint effort to improve the web by adding structured markup to webpages — to implement schemas for informational and news articles and step-by-step guides, and they pledged to integrate “more concise” answers into future and existing Gov.UK content.
- For its part, Amazon has made a concerted effort over the past few months to supply Alexa with new data sources.
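The Schema.org markup mentioned above is typically embedded as JSON-LD in a page's HTML. The fragment below is an illustrative, made-up example of how a government article could be marked up so assistants can parse it; it is not GOV.UK's actual markup.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "UK bank holidays",
  "publisher": { "@type": "Organization", "name": "GOV.UK" },
  "about": "Dates of bank holidays in England, Wales, Scotland and Northern Ireland"
}
```

Structured fields like `headline` and `about` are what let knowledge engines extract a concise, speakable answer instead of scraping prose.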