Apple announces its 2021 Apple Design Award winners

Apple incorporated the announcement of this year’s Apple Design Award winners into its virtual Worldwide Developer Conference (WWDC) online event, instead of waiting until the event had wrapped, like last year. Ahead of WWDC, Apple previewed the finalists, whose apps and games showcased a combination of technical achievement, design and ingenuity. This evening, Apple announced the winners across six new award categories.

In each category, Apple selected one app and one game as the winner.

In the Inclusivity category, winners supported people from a diversity of backgrounds, abilities and languages.

This year, winners included U.S.-based Aconite’s highly accessible game, HoloVista, where users can adjust various options for motion control, text sizes, text contrast, sound, and visual effect intensity. In the game, users explore using the iPhone’s camera to find hidden objects, solve puzzles and more. (Our coverage)

Image Credits: Aconite

Another winner, Voice Dream Reader, is a text-to-speech app that supports more than two dozen languages and offers adaptive features and highly customizable settings.

Image Credits: Voice Dream LLC

In the Delight and Fun category, winners offer memorable and engaging experiences enhanced by Apple technologies. Belgium’s Pok Pok Playroom, a kids’ entertainment app that spun out of Snowman (the studio behind the Alto’s Adventure series), won for its thoughtful design and use of subtle haptics, sound effects and interactions. (Our coverage)

Image Credits: Pok Pok

Another winner was the U.K.’s Little Orpheus, a platformer that combines storytelling, surprises and fun, offering a console-like experience in a casual game.

Image Credits: The Chinese Room

The Interaction category winners showcase apps that offer intuitive interfaces and effortless controls, Apple says.

The U.S.-based snarky weather app CARROT Weather won for its humorous forecasts, unique visuals and entertaining experience, which also extends to Apple Watch faces and widgets.

Image Credits: Brian Mueller, Grailr LLC

Canada’s Bird Alone game combines gestures, haptics, parallax and dynamic sound effects in clever ways to bring its world to life.

Image Credits: George Batchelor

The Social Impact category doled out awards to Denmark’s Be My Eyes, which enables people who are blind or have low vision to identify objects by pairing them, via their phone’s camera, with sighted volunteers from around the world. Today, it supports over 300K users who are assisted by over 4.5M volunteers. (Our coverage)

Image Credits: S/I Be My Eyes

U.K.’s ustwo games won in this category for Alba, a game that teaches about respecting the environment as players save wildlife, repair a bridge, clean up trash and more. The game also plants a tree for every download.

Image Credits: ustwo games

The Visuals and Graphics winners feature “stunning imagery, skillfully drawn interfaces, and high-quality animations,” Apple says.

Belarus-based Loóna offers sleepscape sessions that combine relaxing activities and atmospheric sounds with storytelling to help people wind down at night. The app was recently named Google’s best app of 2020.

Image Credits: Loóna Inc

China’s Genshin Impact won for pushing the visual frontier of gaming: motion blur, shadow quality and frame rate can all be reconfigured on the fly. The game had previously made Apple’s Best of 2020 list and was Google’s best game of 2020.

Image Credits: miHoYo Limited

Innovation winners included India’s NaadSadhana, an all-in-one, studio-quality music app that helps artists perform and publish their work. The app uses AI and Core ML to listen to a performance, give feedback on the accuracy of the notes and generate a matching backing track.

Image Credits: Sandeep Ranade

Riot Games’ League of Legends: Wild Rift (U.S.) won for taking a complex PC classic and delivering a full mobile experience that includes touchscreen controls, an auto-targeting system for newcomers, and a mobile-exclusive camera setting.

Image Credits: Riot Games

The winners this year will receive a prize package that includes hardware and the award itself.

A video featuring the winners is available on the Apple Developer website.

“This year’s Apple Design Award winners have redefined what we’ve come to expect from a great app experience, and we congratulate them on a well-deserved win,” said Susan Prescott, Apple’s vice president of Worldwide Developer Relations, in a statement. “The work of these developers embodies the essential role apps and games play in our everyday lives, and serve as perfect examples of our six new award categories.”

Read more about Apple's WWDC 2021 on TechCrunch

#a-i, #apple, #apple-inc, #apple-watch, #apps, #awards, #belarus, #belgium, #companies, #computing, #denmark, #games, #gaming, #india, #ios, #league-of-legends, #loona, #susan-prescott, #text-to-speech, #united-states, #wwdc, #wwdc-2021


Google Cloud lets businesses create their own text-to-speech voices

Google launched a few updates to its Contact Center AI product today, but the most interesting one is probably the beta of its new Custom Voice service, which will let businesses create their own text-to-speech voices to best represent their brands.

Maybe your company has a well-known spokesperson, for example, but it would be pretty arduous to have them record every sentence in an automated response system, or to bring them back to the studio whenever you launch a new product or procedure. With Custom Voice, businesses can bring their voice talent into the studio and have them record a script provided by Google. The company will then take those recordings and train its speech models on them.

As of now, this seems to be a somewhat manual task on Google’s side. Training and evaluating the model will take “several weeks,” the company says, and Google itself will conduct its own tests of the trained model before sending it back to the business that commissioned it. After that, the business must follow Google’s own testing process to evaluate the results and sign off on them.

For now, these custom voices are still in beta and only American English is supported so far.

It’s also worth noting that Google’s review process is meant to ensure that the result is aligned with its internal AI Principles, which it released back in 2018.
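Once a commissioned voice clears that review, using it should look much like an ordinary Cloud Text-to-Speech call. Below is a minimal Python sketch with the standard client library; the stock WaveNet voice is a stand-in, and exactly how a custom voice model would be referenced in the voice selection is an assumption, since the beta's details aren't spelled out here.

```python
# Hedged sketch: synthesize speech with the google-cloud-texttospeech client.
# A stock WaveNet voice stands in for a commissioned Custom Voice model; how a
# custom model is selected may differ in the beta.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

synthesis_input = texttospeech.SynthesisInput(
    text="Thanks for calling. How can I help you today?"
)

# In production, this is where a brand's trained custom voice would be selected.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)

audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.LINEAR16
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

with open("greeting.wav", "wb") as out:
    out.write(response.audio_content)
```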

As with similar projects, I would expect custom voices for contact center solutions to become mainstream quickly, despite the lengthy creation process. While it will just be a gimmick for some brands (remember those custom voices for stand-alone GPS systems back in the day?), it will allow the more forward-thinking brands to distinguish their contact center experiences from those of the competition. Nobody likes calling customer support, but a more thoughtful experience that doesn’t make you feel like you’re talking to a random phone tree may just help alleviate some of that stress.

#artificial-intelligence, #branding, #cloud, #contact-center, #developer, #enterprise, #google, #google-cloud, #tc, #text-to-speech


Azure’s Immersive Reader is now generally available

Microsoft today announced that Immersive Reader, its service for developers who want to add text-to-speech and reading comprehension tools to their applications, is now generally available.

Immersive Reader is part of the Azure Cognitive Services suite of AI products. With it, developers get access to a text-to-speech engine, but just as importantly, the service offers tools that help readers improve their reading comprehension, whether by displaying pictures over commonly used words or by separating out the syllables and parts of speech in a given sentence.

It also offers a distraction-free reading view, similar to what you will find in modern browsers. Indeed, if you use Microsoft’s Edge browser, Immersive Reader is already included there as part of the distraction-free article view, together with its other accessibility features. Microsoft also bundled its translation service with Immersive Reader.
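For developers, the integration itself is fairly small: a backend hands out an Azure AD token, and Microsoft's client-side Immersive Reader SDK uses that token to launch the reading view in the browser. Here is a hedged Python sketch of that server-side token step, assuming the Azure AD client-credentials flow the Immersive Reader quickstarts describe; the tenant, client ID and secret below are placeholders.

```python
# Server-side sketch: fetch the Azure AD token the Immersive Reader JavaScript
# SDK expects before it can launch the reading view. Assumes the Azure AD
# client-credentials flow against the Cognitive Services resource; every
# identifier below is a placeholder.
import requests

TENANT_ID = "your-azure-ad-tenant-id"
CLIENT_ID = "your-immersive-reader-app-id"
CLIENT_SECRET = "your-client-secret"


def get_immersive_reader_token() -> str:
    resp = requests.post(
        f"https://login.windows.net/{TENANT_ID}/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "resource": "https://cognitiveservices.azure.com/",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


# The token, plus your Immersive Reader resource's subdomain, is then passed to
# the client-side SDK call that actually renders the reading view.
```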

Image Credits: Microsoft

With today’s launch, Microsoft is adding support for fifteen of its neural text-to-speech voices to the service, as well as five new languages (Odia, Kurdish (Northern), Kurdish (Central), Pashto and Dari) from its translation service. In total, Immersive Reader now supports 70 languages.

As Microsoft also announced today, the company has partnered with Code.org and SAFARI Montage to bring Immersive Reader to their learning solutions.

“We’re thrilled to partner with Microsoft to bring Immersive Reader to the Code.org community,” said Hadi Partovi, Founder and CEO of Code.org. “The inclusive capabilities of Immersive Reader to improve reading fluency and comprehension in learners of varied backgrounds, abilities, and learning styles directly aligns with our mission to ensure every student in every school has the opportunity to learn computer science.”

Microsoft says it saw a 560% increase in use of Immersive Reader from February to May, likely because a lot of people were starting to look for new online education tools as the COVID-19 pandemic started. Today, more than 23 million people use it every month and Microsoft expects that number to go up once again in the fall, as the new school year starts.

#artificial-intelligence, #ceo, #code, #code-org, #computing, #freeware, #hadi-partovi, #microsoft, #microsoft-edge, #microsoft-windows, #partner, #reading, #reading-comprehension, #software, #text-to-speech, #windows-10


Google signs up Verizon for its AI-powered contact center services

Google today announced that it has signed up Verizon as the newest customer of its Google Cloud Contact Center AI service, which aims to bring natural language recognition to the often inscrutable phone menus that many companies still use today (disclaimer: TechCrunch is part of the Verizon Media Group). For Google, that’s a major win, but it’s also a chance for the Google Cloud team to highlight some of the work it has done in this area. It’s also worth noting that the Contact Center AI product is a good example of Google Cloud’s strategy of packaging up many of its disparate technologies into products that solve specific problems.

“A big part of our approach is that machine learning has enormous power but it’s hard for people,” Google Cloud CEO Thomas Kurian told me in an interview ahead of today’s announcement. “Instead of telling people, ‘well, here’s our natural language processing tools, here is speech recognition, here is text-to-speech and speech-to-text — and why don’t you just write a big neural network of your own to process all that?’ Very few companies can do that well. We thought that we can take the collection of these things and bring that as a solution to people to solve a business problem. And it’s much easier for them when we do that and […] that it’s a big part of our strategy to take our expertise in machine intelligence and artificial intelligence and build domain-specific solutions for a number of customers.”

The company first announced Contact Center AI at its Cloud Next conference two years ago and it became generally available last November. The promise here is that it will allow businesses to build smarter contact center solutions that rely on speech recognition to provide customers with personalized support while it also allows human agents to focus on more complex issues. A lot of this is driven by Google Cloud’s Dialogflow tool for building conversational experiences across multiple channels.
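Since Dialogflow is the piece developers actually program against, here is a minimal Python sketch (not from the article) of detecting a caller's intent from a single text utterance with the google-cloud-dialogflow client; the project, session and utterance values are placeholders.

```python
# Illustrative sketch of a Dialogflow detect-intent call, the kind of building
# block Contact Center AI relies on. All identifiers are placeholders.
from google.cloud import dialogflow


def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en-US"):
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )

    result = response.query_result
    # result.intent.display_name is the matched intent;
    # result.fulfillment_text is the reply the virtual agent would give.
    return result.intent.display_name, result.fulfillment_text


# Example: detect_intent("my-gcp-project", "caller-123", "I want to change my plan")
```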

“Our view is that AI technology has reached a stage of maturity where it can be meaningfully applied to solving business problems that customers face,” he said. “One of the most important things that companies need is to differentiate the customer experience through helpful and convenient service — and it has never been more important, especially during the period we’re all in.”

Not too long ago, bots — and especially text-based bots — went through the trough of disillusionment, but Kurian argues that we’ve reached a very different stage now and that these tools can now provide real business value. What’s different now is that a tool like Contact Center AI has more advanced natural language processing capabilities and is able to handle multiple questions at the same time and maintain the context of the conversation.

“The first generation of something called chatbots — they kind of did something but they didn’t really do much because they thought that all questions can be answered with one sentence and that human beings don’t have a conversation,” he noted, adding that Google’s tools can automatically create dialogs from a company’s existing database of past voice calls and chats.

When it isn’t able to solve a problem, Contact Center AI can automatically hand the call off to a human agent. Another interesting feature is its ability to essentially shadow the human agent and provide real-time assistance.

“We have a capability called Agent Assist, where the technology is assisting the agent and that’s the central premise that we built — not to replace the agent but assist the agent.”

Because of the COVID-19 pandemic, more companies are now accelerating their digital transformation projects. Kurian said that this is also true for companies that want to modernize their contact centers, given that for many businesses, this has now become their main way to interact with their customers.

As for Verizon, Kurian noted that this was a very large project that has to handle very high call volumes and a large variety of incoming questions.

“We have worked with Verizon for many, many years in different contexts as Alphabet and so we’ve known the customer for a long time,” said Kurian. “They have started using our cloud. They also experimented with other technologies and so we sort of went in three phases. Phase One is to get a discussion with the customer around the use of our technology for chat, then the focus is on saying you shouldn’t just do chat, you should do chat and voice on a common platform to avoid the kind of thing where you get one response online and a different response when you call. And then we’ve had our engineers working with them — virtually obviously, not physically.”

He noted that Google has seen quite a bit of success with Contact Center AI in the telco space, but also among government agencies, for example, especially in Europe and Asia. In some verticals like retail, he noted, Google Cloud’s customers are mostly focused on chat, while the company is seeing more voice usage among banks, for example. In the telco business, Google sees both across its customers, so it probably made sense for Verizon to bet on both voice and chat with its implementation.

“Verizon’s commitment to innovation extends to all aspects of the customer experience,” said Verizon global CIO and SVP Shankar Arumugavelu in today’s announcement. “These customer service enhancements, powered by the Verizon collaboration with Google Cloud, offer a faster and more personalized digital experience for our customers while empowering our customer support agents to provide a higher level of service.”

#articles, #artificial-intelligence, #asia, #ceo, #cloud-computing, #dialogflow, #europe, #google, #google-cloud, #machine-learning, #natural-language-processing, #neural-network, #speech-recognition, #tc, #techcrunch, #technology, #text-to-speech, #thomas-kurian, #verizon-media-group
