Recently, a company known as MindMeld, which provides voice search technologies, surveyed US smartphone users and found that 60 percent had started using voice search within the past year.
Sam Vasisht is CMO of voice activation platform MindMeld. He's bullish on the technology, but he says communicating more electronically doesn't necessarily mean less personal interaction; it's now easier to keep in touch with more people.
Inspired by Spock, this startup aims to make computer-human communication happen at high speed for us non-Vulcans. Well, almost. MindMeld is a leader in upgrading text interfaces to natural voice interfaces. They are a lot faster and more fun to use.
As Tim Tuttle, CEO at MindMeld, tweeted about automatic speech recognition, “Accuracy improvements in the past 2 yrs have dwarfed all improvements over the past 30 yrs combined”.
In 2015, voice search soared to 10 percent of all search volume globally. According to Timothy Tuttle of MindMeld, that’s a jump from a statistical zero to 50 billion searches a month that are now being performed by voice search. The numbers are corroborated by an eMarketer report that cites even higher usage.
According to the “User Adoption Survey” commissioned by predictive speech analytics specialist MindMeld, use of voice-based intelligent assistants has reached a tipping point and is heading toward mass acceptance.
As for the companies Samsung has been spending its money on, AI startups such as Vicarious, Idibon, MindMeld, Reactor Labs, Automated Insights and Maluuba are among those that have received venture capital funding from the South Korean firm.
One company hot on the trail of natural language understanding is MindMeld. MindMeld provides its natural language understanding capabilities to other companies that are looking to add intelligent voice interfaces to their products, services, or devices.
But while Amazon may own the mindshare now, according to a study by MindMeld, only 4% of all smartphone users have used Alexa.
Over half of voice assistant users (55%) in MindMeld’s January 2016 survey said they used smartphone voice assistants daily or weekly, an increase from the previous quarter, when 49% said so.
The team at MindMeld, a company pioneering the development of technology to power a new generation of voice-driven applications, recently shared results from a user adoption survey and offered some interesting insights into how voice-assisted Conversational UI continues to gain traction.
During the Virtual Assistant Summit hosted by events company RE.WORK in San Francisco on 28-29 January, Tim Tuttle, founder and CEO of MindMeld... gave the audience some telling statistics.
In December, the makers of voice question-and-answer platform MindMeld released results from a survey of 1,800 adult smartphone users.
A survey of 1,800 U.S. adults taken last fall showed 63.3% of smartphone users had used voice-based search, with the largest percentage of those respondents, 41.6%, using it for the first time in the prior six months, according to MindMeld, a voice-search technology company.
Finally, there are a few startups that blossomed this year, like SoundHound and MindMeld, moving into the intelligence terrain. They feel they can outsmart the tech giants and, perhaps as critically, circumvent them.
Voice search and virtual assistants hold enormous promise (and “disruptive” potential)... I recently spoke with MindMeld CEO Tim Tuttle about voice assistant adoption.
"We're going to try to make talking to our machines a reality, like in science fiction movies," says Tim Tuttle, CEO of MindMeld, an exciting project that aims to go beyond being merely an efficient voice assistant, and that is bound to compete with Siri.
MindMeld has just announced that it is launching a language-recognition and intelligent-response system that can be adapted to any content...
They say MindMeld answers natural-language questions such as "Which Steven Spielberg movies feature Harrison Ford?" or "Show me Harry Potter movies" better than Siri, Cortana, or Amazon Echo.
MindMeld (formerly Expect Labs) launched a new platform that companies can use to add voice interfaces, including question-answering and language-understanding capabilities, to apps and devices.
There’s a single phrase that captures MindMeld’s intent with its new platform and capabilities: to “achieve scale” as growth in voice and natural language search and device control accelerates in the coming years.
San Francisco-based Expect Labs, which previously positioned itself as a kind of "Google Now in a Box" or "Siri in a Box" for third-party developers, has changed its name to MindMeld. The company is also launching a second generation of its technology, which it now describes as a "platform for creating large-scale language-understanding and question-answering capabilities on apps and devices for any custom content domain."
With MindMeld 2.0, an organization can build a voice-enabled system that uses natural language technology to offer human-like responses for custom content, like proprietary datasets. Businesses can use MindMeld via the cloud or buy an on-premises version.
"We're trying to make it possible for us to talk to our machines, like in science fiction movies," company CEO Tim Tuttle told Business Insider.
While this scale of automated customer service is indeed impressive, the role of “artificial intelligence for customer service” promises to go beyond current state-of-the-art intelligent assistants deployments and usher in a new level of sophisticated, highly accurate customer interactions. This was a topic of a panel discussion, “Beyond Q&A – AI for Intelligent Assistants”, held at IAC NYC 2015 that included Amtrak as well as Timothy Tuttle, PhD, CEO & founder of Expect Labs/MindMeld, and Andy Mauro, Senior Director, Cognitive Innovations Group, Nuance Communications.
What has the power to deliver higher engagement, greater satisfaction and increased loyalty? The answer is voice-enabled search. By enabling voice search instead of traditional type-and-click, searching for products becomes three to four times faster. Sam Vasisht, CMO of [MindMeld], insists that artificial intelligence has cracked the code on voice technology.
Another perfect storm of market conditions is brewing for a second wave of virtual personal assistants and conversational interfaces, exceeding the first in both intelligence and pervasiveness.
We're speaking with Tim Tuttle, founder and CEO of a company called MindMeld.
CEO Tim Tuttle explained that MindMeld tends to use the speech recognition already available in most devices and instead focuses on natural language understanding and building a knowledge graph of the available information.
Expect Labs CEO Tim Tuttle has a vision for voice. In a talk at the DATAVERSITY® Smart Data 2015 Conference, Tuttle details the recent history of voice processing and how the field has advanced at warp speed.
Voice search is becoming more and more important as service providers offer video content from many different sources and platforms. We talked to Expect Labs CEO/founder Tim Tuttle, whose company develops voice-driven applications.
There are also artificial intelligence startups, often based on deep learning, that specialize in outsourcing a variety of these sci-fi tasks. Expect Labs specializes in voice search.
"Whoever creates the intelligent assistant will be the first place people go to find things, buy things, and everything else," former AI researcher Tim Tuttle, CEO of the voice interface firm Expect Labs, said last week.
If you haven’t tried Google Voice Search, Apple’s Siri, Microsoft’s Cortana, or even Amazon.com’s Echo "smart" speaker recently, you may be surprised how much better they’re working than even six months ago.... "Recent AI breakthroughs have cracked the code on voice, which is approaching 90% as good as human accuracy," says Tim Tuttle, CEO of Expect Labs, which began offering its MindMeld cloud-based service last year to help any device or app create voice interfaces.
The really big improvements will come first in products developed by the likes of Google, Apple and Microsoft. Call centers will take a little bit more time, said Tim Tuttle, the CEO of AI company Expect Labs.
"All of the best intelligent system technology is cloud-based today," says Expect Labs’ Tuttle. "In the future, that is not necessarily going to be the case. In three to five years, it’s certainly possible that a large percentage of the computation done in the cloud today could conceivably be done locally on your device on the fly. Essentially, you could have an intelligent system that could understand you, provide answers on a wide range of topics, and fit into your pocket, without any connectivity at all."
Through advanced language understanding technology, MindMeld can provide detailed information about possible conditions and where to turn to for help.
Sense.ly, an avatar based platform that helps clinicians better manage their chronic care patients, has announced that it has partnered with Expect Labs, the company behind the MindMeld platform for building AI-powered voice-driven applications, to bring artificial intelligence to its app.
One of the most interesting [virtual nurses] is Sense.ly, a new service built on top of the code designs offered by Expect Labs' MindMeld service.
Fetch uses Expect Labs' MindMeld, which bills itself as "the first platform for building advanced voice-driven applications."
Tuttle believes that up to 40 percent of the apps launched for mobile devices will come with voice recognition by the end of this year. Three years from now, he says, voice will be the primary user interface with our devices. "And we’re still just scratching the surface," he adds.
Eventually, if the recommendation algorithm works well enough, it could be used to suggest things direct to users without human intervention, says Tim Tuttle, chief executive officer of Expect Labs, the company supplying the AI. A single tap would trigger the entire process of automated fulfillment and delivery.
Oakland-based Fetch says it now has the first AI-powered smartwatch app, thanks to MindMeld technology developed by San Francisco-based Expect Labs.
On Tuesday, Fetch announced an integration with MindMeld’s artificial intelligence technology to better interpret on-demand voice requests on its Apple Watch app. This partnership makes Fetch, an on-demand personal shopping service, the first third-party AI-powered app for the Apple Watch.
Mobile commerce startup Fetch has joined forces with Expect Labs to add artificial intelligence to its top-rated voice activated Apple Watch concierge app.
This means Apple Watch owners can make requests through their smartwatch. Yes, the future is fully here, thanks in part to MindMeld's API.
If you wanted to buy, say, a sweet messenger bag someone was rocking in SoHo, you could snap a photo, send it along, and someone would eventually respond with the cheapest, most appropriate listing they could find. With Expect Labs' voice recognition and analytical chops now being baked into the existing iOS/Apple Watch app, though, those requests can be chopped up and acted on more quickly.
With more than 1,300 companies currently using MindMeld to integrate voice into their applications, devices and websites, Expect Labs is clearly gaining traction.
Expect Labs has developed MindMeld, a platform that enables app developers to integrate sophisticated voice recognition interface technology into their products. "Many companies are scrambling to integrate voice into their apps,” said Tim Tuttle, founder and CEO. "It's a huge undertaking."
"We are not close to creating anything remotely humanlike," said Tim Tuttle, CEO and founder of AI firm Expect Labs. "What we are close to is designing systems that are good at very specific tasks."
So far, 1,300 companies have signed up for Expect Labs’ MindMeld, a technological platform that allows companies to add voice recognition capability — think Siri or Google Voice — to their product. Theoretically, a law firm could use MindMeld so that people could ask its website questions, rather than clicking on banners or typing in search terms.
For Expect Labs, context is everything... For instance, if a driver is in the car at 8 a.m. and asks the navigation system to find a coffee shop, the car can surmise that the driver is likely on the way to work and can identify a coffee shop along the usual route. The idea is that these voice-powered interfaces should be able to process more than the simple command and respond the way a human would.
... due to the emergence of a small but growing number of cloud-based APIs like MindMeld, it’s now possible for developers to build an intelligent voice interface for any app or website without requiring an advanced degree in natural language processing.
Some of the advances that we've seen in [automatic speech recognition] recently are going to enable a new generation of voice-driven applications over the next few years that are going to fundamentally change how we interact with computing devices.
The latest AI dawn owes much to new programming techniques for approximating "intelligence” in machines. Foremost among these is machine learning, which involves training machines to identify patterns and make predictions by crunching vast amounts of data… In the case of Expect Labs, this means a voice-activated service that companies can use to help their customers do things like search through their online catalogues.
Recent A.I. breakthroughs have cracked the code on voice, and in perhaps 18 months, machines will be able to follow spoken instructions even better than humans do…
…recent breakthroughs in speech recognition and artificial intelligence will soon make gadgets dramatically better at understanding people. This new breed of highly competent machines, which are able to not only hear us but to understand context and nuance, is just a year or two away.
Expect Labs is poised to expand its speech recognition technology that predicts what you say - and what you intend to search for - before you say it.
Expect Labs shows off some of the possible uses for its API, including searching through content on a video service, finding products through online retailers, and finding and working through recipes in the kitchen without touching your device’s screen.
MindMeld platform maker Expect Labs has secured $13 million in Series A funding. IDG Ventures and a USAA subsidiary led the round. Other investors included Intel Capital, Samsung Ventures, Telefónica Digital, Liberty Global Ventures, Fenox Ventures, Westcott, Quest Venture Partners, Google Ventures, Greylock Partners, Bessemer Venture Partners and KPG Ventures.
There’s more to speech recognition apps than Siri, Cortana or Google voice search, and a San Francisco startup called Expect Labs aims to prove it.
Expect Labs, a San Francisco-based developer of an API that lets developers plug in an intelligent voice interface, has raised $13 million in Series A funding. IDG Ventures and USAA co-led the round, and were joined by such backers as Samsung, Intel Capital and Telefonica.
Expect Labs, a company providing intelligent voice interfaces for applications and devices, announced it has raised $13 million in new funding.
MindMeld, as the service is called, combines advanced machine learning and speech recognition algorithms into an integrated whole that the startup says can not only understand what a user is asking but piece together an appropriate answer based on a variety of information sources.
"You want your computing devices to pay attention to what you’re doing, so they can do a better job of anticipating what information you might need," says Tim Tuttle, chief executive of Expect Labs...
With around 1,000 businesses already using the API… [the MindMeld API] is advanced enough to enable a whole new approach to using speech recognition in the workspace, at home and in the car.
After performing a few voice searches with a demo of MindMeld’s shopping search, I found it to be accurate and fast. It feels like Siri or Google Now, except entirely for shopping.
Expect Labs is expanding the scope of its MindMeld API with a new offering focused specifically on enabling voice-powered mobile recommendation apps for retailers.
… it's not just the big names of tech who are running in the digital assistant race: Plenty of smaller companies are also competing for a piece of that sweet predictive analytics pie, and one of them looks especially formidable.
[Expect Labs will] help In-Q-Tel and the government agencies it works with to create better tools for digging through large sets of information using the same MindMeld technology it offers to anyone else.
The audience votes have been tallied, the judges have convened, and now we’ve got this year’s winners for our MobileBeat Innovation Showdown. Expect Labs, a company that has developed innovative voice-powered prediction technology, is the winning early-stage company…
… [the world] needs smart apps that are easier to use. The MindMeld API seemingly provides that future.
Everyone wants to implement smart voice integration into their devices and apps — but few have the expertise to do so. Now Expect Labs, a startup that has created some powerful predictive computing technology, is giving developers one way to make that dream come true.
Expect Labs has released a new service that specializes in voice-powered searches for movies and TV shows (and, theoretically, anything). The company is hailing it as a victory for couch potatoes, who could be saved the effort of lifting their fingers to find stuff to watch, but it’s content providers that would really win…
A new wave of predictive- and real-time-technology startups are redefining how sharp smart tech can be.
Expect Labs offers an API that may create new opportunities for consumer engagement
Expect Labs has revamped its MindMeld artificial intelligence API to support new languages, more devices and better speech recognition.
MindMeld – you may know the term best from Star Trek and those fun-loving Vulcan practices. But it also lives at Expect Labs, as an app that listens to and understands conversations and finds relevant information within them, and as an API that lets developers create apps that leverage contextually driven search and discovery – and may even find the information users need before they explicitly look for it.
[Expect Labs and] several companies are now coming up with new software development kits and APIs for developers to take AI to the next level.
[Expect Labs is] on the forefront of building this kind of persistent search and analysis architecture in the field of natural language processing.
…artificial intelligence is getting even smarter. The next wave of behavior-changing computing is a technology called anticipatory computing - systems that learn to predict what you need, even before you ask.
With [MindMeld], Expect Labs has created a whole new approach to video chat… and it's something that portends the next wave of predictive innovation.
… what Expect Labs is offering could be likened to 'Google Now in a Box.' Developers and publishers will thus be able to offer predictive or contextual search and more sophisticated recommendations and content discovery to users.
[The MindMeld API is] the latest example of just how powerful APIs are becoming and offers yet another glimpse into how intelligent we will expect applications to be in the years to come.
Anticipatory computing startup Expect Labs released an API Wednesday to help developers build smarter contextual-based search and content discovery.
MindMeld uses natural language processing to understand your context and then serve up relevant information before you’ve even asked for it. The new API will allow developers to add a highly-customizable layer of intelligence to their apps.
Expect Labs and the companies on the list represent the disruptive innovations most likely to change our lives.
With Siri, Apple promised a system that could fulfill the long-held promise of AI. But to many experts in the field, the system is a letdown. One startup thinks it can do better.
My first call was to Tim Tuttle, the CEO of San Francisco–based start-up Expect Labs. Tuttle, who got his PhD from the MIT artificial-intelligence lab, led the team that created a virtual-assistant program called MindMeld, which has been billed as a 'supercharged Siri.'
It wasn't long ago that everyone thought Siri would revolutionize the way we interact with our phones. That never quite happened, but new technology could fulfill the promise of a fully interactive AI personal assistant.
[MindMeld] understands your conversations, and filters through mentioned topics to recommend relevant information before you even know you need it.
MindMeld listens to what you're saying, understands it, finds information related to it, and shows it to you without you having to go out and look for it.
New apps that listen to conversations or scan emails and calendars can predict and provide information such as websites, videos and maps to users before they ask for them or realize they want them.
Expect Labs' technology… is really the fulfillment of the vision behind Google Now: real-time, useful information that dynamically changes based on context.
Imagine a world where devices are so smart they surface the information you're looking for the moment you think of it… One of the leading companies in this space is Google- and Samsung-backed Expect Labs.
MindMeld… is one of the more ambitious application ideas out there.
This idea of anticipatory computing is going to be the next big change in our relationship with computers. And it's coming more quickly than you realize… By following along and adding context where it can, MindMeld can make a call more fruitful.
Expect Labs’ MindMeld is another instance of high-level computing being applied to personalization. The iPad app, which showcases the company’s "anticipatory computing engine," hasn’t even been released to the public yet, but it’s attracted the attention of numerous industry leaders…
… sooner or later, [anticipatory computing is] likely to become part of everyone’s computing experience.
The emergence of anticipatory computing – one of the pillars supporting the current and future design and evolution of the Web and the user experience (UX) – is crucial to the equation that is content discovery… The ultimate goal is to continue making our lives easier.
If Expect Labs [has its way], the ACE will find itself embedded in a slew of devices and services all around you, waiting to dish out data at a moment’s notice.
Samsung, Intel, Google Ventures and Telefonica are already on board, and now San Francisco startup Expect Labs can count the world’s largest broadband provider outside the U.S. as a backer of its vision of anticipatory computing.
Artificial intelligence that knows what we need before we need it is certainly an exciting proposition, and Expect Labs, without even releasing a product, has attracted a group of investors that are each major players in their respective fields.
MindMeld goes further than [Siri], helping users remember names and understand what’s happening better than a human can.
… an impressive display of what the company calls ‘Anticipatory Computing'.
… the Anticipatory Computing Engine analyses conversations while they're happening in order to have pertinent information at the ready before a user knows they need it.
[Expect Labs] could open up a new generation of predictive, voice-activated search that leaves Siri in its dust.
Expect Labs [is] building a Siri-like platform on steroids…
… Expect Labs now has buy-in from one of the largest players in each vertical: devices, software, hardware and carriers.
The backing of leading companies from the consumer device, semiconductors and telecommunications worlds is a significant endorsement of the company’s vision.
This is definitely a company to watch.
Expect Labs has attracted attention because its technology is in line with the general direction that search technology has been taking with the advent of wearable computers such as watches and glasses, and Internet-connected cars and TVs.
This buy-in from some very prominent partners speaks to the stickiness of Expect Labs’ vision of the future of computing.
That three major corporations with stakes in computing, mobile and home electronics would want to proactively invest in Expect Labs' tech is a no-brainer.
Speech recognition is becoming a more central part of a growing number of consumer electronics products, including smartphones, tablets and TVs, but Expect Labs wants to take what is currently offered to the next level.
Expect Labs’ Anticipatory Computing Engine has the incredible potential to learn what we’re looking for even before we search for it…
Will [users] be as open to TV screens and telephones that… suggest that right recipe, state capital, or song title? Some of the biggest names in the tech world are betting the answer is yes.
Expect Labs [is] a company that you should definitely keep your eye on.
… it’s important to note that tech companies are, quite literally, buying into the notion of "anticipatory computing" as the next major step forward in the way content and information are delivered to consumers.
… it appears the future of search isn't search at all.
Expect Labs has tapped big data startup Factual Inc. and its sizable store of location data in a bid to make its vaunted Anticipatory Computing Engine even smarter.
They’ve really hit a nice niche… That it doesn’t require explicit interaction is really quite exciting.
And we’re just scratching the surface of the potential of this industry. Anticipatory computing will be the trend to watch for years to come, and will affect everything from shopping to behavior to social media and back again.
… we can see great things in this app's future.
MindMeld searches for things relevant to your conversation.
A better way to brainstorm.
MindMeld's speech recognition worked superbly, the app's UI looked absolutely spot on and sharing web pages between callers was instantaneous.
We're always showing you information that we think is related to what you're talking about.
… this voice-and-video caller offers a layer of functionality that I haven't yet seen in a communications app.
[Here are] some of the newest apps that will make your connected life more productive and entertaining.
The prospect for MindMeld, and other post-Siri applications, to be quick learners is what makes me quite bullish for prospects of better personal virtual assistants (or advisors) in 2013.
Expect is one of the companies I am most excited about as it represents big progress in how we consume information and interact with devices.
We’re now getting the first taste of a world where our computing devices fetch us relevant information without our asking.
Please hurry — I (and more than a few others, naturally) want to start playing with this as soon as humanly possible.
Expect Labs thinks it has what it takes to read your mind, and it’s building the MindMeld iPad app equipped with its "anticipatory computing engine" to prove it.
By continuously analyzing signals such as location, voice, and online activity, the Expect Labs platform creates a real-time context model, which can be used to anticipate and filter any information that a user may need.
Just when you thought competition in search was effectively over, it starts to get interesting again.
It's a great example of what the future of mobile applications may look like.
… Google Ventures-backed Expect Labs analyzes voices during chats to deliver appropriate Web pages without users having to enter search terms. The San Francisco-based company has been working in stealth since 2010, developing technology to listen in on conversations, analyze them and then suggest related websites the speakers might find helpful. The company will launch MindMeld, a video chat app for the iPad, later this year.
… If a group is planning to meet for sushi in Las Vegas, the panel will be populated with restaurant listings, reviews, and reservation options. Or if an executive is conducting a job interview by videoconference, the candidate’s LinkedIn profile and related Web links would fill the screen…
… he co-founded Expect Labs in San Francisco to build software for a time when humans no longer hunt down information by typing queries into a text box on a computer. Instead, Tuttle sees the day coming when searches take place automatically based on things people are saying and doing…
The Expect Labs team has written nine patents covering the technology, which combines real-time language analysis with data signals, including geolocation and past behavior…
… more advanced than Siri or Google Now, the service doesn’t wait for you to ask it to search, it just searches in the background while you’re carrying on a normal conversation.
Tim Tuttle intends to give Siri and Google Now a run for their money with MindMeld, an iPad app slated to launch in the App Store this fall, with an iPhone app soon to follow.
… MindMeld iPad app looks pretty killer. The software supposedly can listen in the background of a voice chat and predictively grab information from the Internet for users to interact with and share with each other.
Expect Labs, the company that is helping usher in the era of anticipatory (or predictive) computing says it has received a big cash infusion from well known investors such as Greylock Partners and Google.
… it’s a big step forward in the way we’ll interact with our computing systems, or at least that’s what Expect Labs will be working on with the $2.4 million it received in investor funding today from Google Ventures and Greylock Partners.
With the power to deliver search results before you even type a query, anticipatory computing engine Expect Labs Inc. has raised seed funding to further scale its technology.
Expect Labs’ big draw is what it calls its Anticipatory Computing Engine, a system that aims to analyze "multi-party conversations" and provide contextual information in real-time without requiring input from the user.
Devices that listen to your every word and search for information before you even know to ask.
Depending on what types of food, drinks and possible meeting places are mentioned, MindMeld "listens" in the background and pulls up pertinent restaurant suggestions, reviews, maps, images and phone numbers using data from across the web and social networks.
What’s most important to Tuttle, who co-founded early video search engine Truveo (acquired by AOL), is the technology Expect has built, which he wants to provide to other calling apps.
A San Francisco-based company called Expect Labs introduced an iPad app called MindMeld this week at the TechCrunch Disrupt conference. The app is a voice-call and video-chat program. Based on what it hears people say during an online conversation, it initiates its own search without any other action by the user.
For the Battlefield, we’ve narrowed down the 30 companies in our startup battlefield to seven startups competing for the grand prize: the Disrupt Cup and a $50,000 stipend… Expect Labs’ MindMeld app analyzes your conversations in real time and pulls pertinent information so you’re always in the know.
Put this app on WiFi – which we did – and everything from picture quality to voice latency and information being pushed to the screen was pretty flawless. Sure, it was a demo on a system used by no more than a dozen people, including all eight of the company’s employees, but I have seen many demos in my time. Someday, Siri will work as flawlessly as this app and will get an A-plus from me.
The idea here, Tuttle said, is that MindMeld is at the forefront of the kind of technology that will surround us everywhere in a few years and that will be always on and constantly paying attention in order to serve up information when we want it, or possibly even before we know we want it.
Wouldn’t it be easier if there were a piece of software that could analyze your conversations in real time and pull pertinent information so you’re always in the know? Of course, which is why Expect Labs’ MindMeld iPad app has the potential to advance natural-language recognition and push the envelope where Google, Microsoft, and Apple have failed so far.
MindMeld releases Q1 2016 survey results citing record-breaking adoption and usage.
Innovative MindMeld 2.0 platform gives any business state-of-the-art AI technology to streamline product and content discovery and automate customer support.
MindMeld and Sense.ly are leveraging the latest advances in artificial intelligence to build customized virtual assistant applications for the healthcare industry.
As smartwatch and smartphone commerce heats up, MindMeld teams with Fetch to give everyone their own AI-powered personal buying assistant.
Expect Labs' MindMeld is the first cloud-based service designed to power the emerging generation of voice-driven applications.
The MindMeld API now lets any developer create a next-generation, voice-driven shopping experience in minutes.
Expect Labs' partnership with In-Q-Tel will drive development of the MindMeld API for U.S. government applications.
The MindMeld API now lets any developer create next-generation voice interfaces in minutes.
The MindMeld API enables developers to add intelligent contextual search and recommendations to any website, app, or device in minutes.