Latest Research

Feedback Loup: Clips

Yesterday Apple released Clips, a new app for iPhone users. Apple describes Clips as “a new iOS app for making and sharing fun videos with text, effects, graphics, and more.” And Clips is fun, but it doesn’t show us the kind of augmented reality lenses and layers that we were hoping to see from Apple.

We’ve written a lot about how AR will change the way we interact with computers. Over the next several years, the smartphone will increasingly become a window through which users can see an augmented world. Players like Apple and Google are well-positioned to win the jump ball to own the dominant operating systems in that new paradigm. Google’s leadership in core disciplines like maps, data, and content makes it an important incumbent. Apple’s leadership among app developers and in payments will be important, but we think design is Apple’s trump card in AR. Clips, however, is more filters and effects than lenses and layers. There is an interesting real-time transcription capability, but unfortunately Clips is short on true AR.

In about five minutes, I was able to put together a short video with text, effects, filters, and music. Clips uses fairly rudimentary real-time computer vision, but this could be the beginning of the underlying technology that will one day direct you to your seat in a stadium, overlay talking points during a presentation, or provide instructions as you assemble new furniture.

We’re surprised that Apple isn’t pushing Clips’ features to more users faster by integrating them with core iOS functions like Camera or Messages. With Clips, Apple had three options:

  1. Fully integrate it into an existing iOS app (like photo filters in the Camera app).
  2. Release it as a standalone app pre-installed on iOS devices (like the Home app).
  3. Release it as a standalone app available for download in the App Store (like the Remote app).

Apple chose option 3 for Clips. Eventually, we think AR functions will be fully integrated not only into the Camera app, but into any app that wants to access the iPhone’s camera to overlay app-specific information on the real world. Clearly, Clips doesn’t give us much direction as to Apple’s plans for AR on the iPhone, but we expect more hints to drop soon. We expect Apple to reveal some AR capabilities when the company shows iOS 11 for the first time at WWDC in June, and we expect to see even more of an AR focus when the new iPhone is released this fall.

Disclaimer: We actively write about the themes in which we invest: artificial intelligence, robotics, virtual reality, and augmented reality. From time to time, we will write about companies that are in our portfolio. Content on this site, including opinions on specific themes in technology, market estimates, and estimates and commentary regarding publicly traded or private companies, is not intended for use in making investment decisions. We hold no obligation to update any of our projections. We express no warranties about any estimates or opinions we make.

Would You Let a Robot Do These 12 Jobs?

This week we attended Automate, a robotics trade show in Chicago focused on manufacturing and fulfillment. As we explored the show floor, we saw a number of robots able to work next to humans at an affordable cost of $30-40k each, and most vendors claimed a payback period of less than 12 months. Automation is now within reach for many businesses, and they are slowly becoming comfortable implementing robots, in part for the cost advantage and in part out of necessity, given the difficulty of finding labor.

We left the show with two questions. First, is our belief that it will take 30 years for robots to replace 70% of human jobs too conservative? Short answer: we’re optimistic, and we should have a better idea of the pace of automation over the next few years. Second, how comfortable are consumers with robots doing certain jobs? To address this question, we surveyed 500 average US consumers and asked them to rate their comfort level with robots performing 12 specific tasks.

Survey methodology. We asked survey takers across a mix of age and income demographics to rate their comfort level with robots performing various tasks on a scale of 1-5: 1 being extremely uncomfortable, 3 being neutral, and 5 being extremely comfortable. We surveyed jobs in four categories: time-consuming chores, transportation, personal/family livelihood, and professional services.
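
For the curious, here is a minimal sketch of how responses like these can be tallied into per-demographic comfort scores. The rows, tasks, and ratings below are hypothetical stand-ins, not our actual survey data.

```python
from collections import defaultdict
from statistics import mean

# Each response: (age bracket, task, comfort rating on a 1-5 scale,
# where 1 = extremely uncomfortable, 3 = neutral, 5 = extremely comfortable).
# These rows are invented for illustration.
responses = [
    ("18-29", "vacuuming", 4),
    ("18-29", "driving a car", 2),
    ("30-44", "vacuuming", 5),
    ("30-44", "performing surgery", 1),
    ("40-59", "driving a car", 1),
    # ... one row per (respondent, task) pair: ~500 respondents x 12 tasks
]

# Average comfort by (age bracket, task).
buckets = defaultdict(list)
for age, task, rating in responses:
    buckets[(age, task)].append(rating)

for (age, task), ratings in sorted(buckets.items()):
    print(f"{age:6} {task:20} mean comfort = {mean(ratings):.2f}")
```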

Time-consuming chores. We found general acceptance of robots performing daily, relatively safe, time-consuming chores such as vacuuming, mowing the lawn, or preparing food. Of the four categories of jobs we surveyed, time-consuming chores were viewed the most positively. This may be because these tasks have little downside if a robot fails to do them well; you just end up with a poorly vacuumed house, a butchered lawn, or a bad meal. The 30-44 age bracket, loosely late millennials, consistently indicated the highest levels of comfort with this category (and most categories), while the 18-29 demo, or early millennials, surprisingly saw less benefit in robot vacuums or meal-makers, perhaps because many in this demo don’t perform these tasks themselves.

Transportation. Our survey results indicated general indifference to a robot controlling a subway or train ride, but less comfort with a robot driving a car or piloting a plane. We believe this relates to the relative downside risk of robotic failure mentioned in the prior section: it’s hard to imagine a catastrophic subway accident, but easy to imagine a major car accident or a downed plane. Interestingly, the 40-59 demo, or Gen X, was the most uncomfortable of the demo groups, with, on average, 65% of responses indicating uncomfortable or extremely uncomfortable. We suspect relatively risk-averse Gen X upbringings and the presence of young or teenage children may account for these results.

Personal/family livelihood. Given the discomfort with letting a robot drive a car, it’s not surprising that most people aren’t ready to let robots take on even more personal tasks like performing surgery or babysitting. Again, the downside risk of robotic failure is very apparent for both of these tasks, hence the lean toward discomfort. Babysitting requires empathy, which we view as one of the three key advantages humans hold over robots, along with creativity and community. Robots, on the other hand, should theoretically be better surgeons than humans: robo-surgeons have steadier and more precise instruments and can process more data about the body simultaneously than a human surgeon. Even so, consistently high levels of discomfort with this category appear across all demographics, and robo-babysitters scored an average of 20 points higher on the discomfort scale than robo-surgeons.

Professional Services. Survey respondents were generally uncomfortable relying on robots to perform professional services, aside from personal training. We feel this again relates to risk: while there is no mortal risk in poorly prepared taxes or botched legal services, those failures carry a financial cost. The 18-29 year old demographic showed the most comfort using robo-accountants or robo-lawyers.

Overall, trust still needs to be built between consumers and robots to get to the automated future we envision, particularly around tasks perceived to be “dangerous.” As humans come to accept that robots can perform almost all “dangerous” tasks with greater safety than humans can, the associated comfort levels should rise.

AI’s Busted Bracket

The Loup Ventures NCAA bracket contest isn’t as hotly contested as we thought it would be. We entered Bing’s AI bracket into our pool, and it’s just as busted as the others. In fact, Bing’s bracket will finish at the bottom of our pool, in 7th place, regardless of the outcome of tonight’s game. We would like to think that we outsmarted the AI, but the reality is that predicting the outcome of the NCAA tournament is more a matter of luck than skill. Bing’s performance doesn’t mean its model is broken, just that it was unlucky this year.

[Image: Bing Predicts 2017 NCAA Basketball Bracket]

To date, Bing has chosen 39 of the 66 games played correctly, including the opening round. Bing went 2 of 4 in the opening round, 24 of 32 in the 1st Round, 9 of 16 in the 2nd Round, and 4 of 8 in the Sweet Sixteen, before going 0 for 4 in the Elite Eight and ending its chances at victory. If you look at Bing’s bracket now, it tells a different story, because Bing re-picked winners for each matchup after every round. Even with that adjustment, it picked only 47 of 66 games correctly heading into tonight’s game. In the adjusted rounds, Bing called Final Four weekend correctly, choosing Gonzaga and UNC as winners, with UNC ultimately taking home the crown.
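
As a quick sanity check on that tally, here is the round-by-round arithmetic. The 0-of-2 Final Four line is our inference, since Bing’s original picks were already eliminated by then.

```python
# Bing's original-bracket results per round: (correct picks, games played).
rounds = {
    "Opening round": (2, 4),
    "1st Round":     (24, 32),
    "2nd Round":     (9, 16),
    "Sweet Sixteen": (4, 8),
    "Elite Eight":   (0, 4),
    "Final Four":    (0, 2),  # original picks eliminated by this point (inferred)
}

correct = sum(c for c, _ in rounds.values())
played = sum(p for _, p in rounds.values())
print(f"{correct} of {played} games correct")  # -> 39 of 66 games correct
```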

How does Bing predict winners? First, it’s important to understand how Bing picked its winners. The Bing Predicts algorithm factors in millions of data points in an effort to create the best predictive model. It looked at every college basketball game played in the last 15 years, searching for correlations between measurable statistics and wins. The algorithm outputs the likelihood that a team will win a given game; it isn’t meant to name a certain winner, but the higher the percentage, the greater the expected disparity between the teams.

Walter Sun, an architect of the Bing Predicts algorithm, was asked by Wired about some of the important considerations in the algorithm. Defensive efficiency, strength of schedule, coaching rankings, and miles traveled are a few of the metrics the algorithm weighs:

  1. Defensive efficiency: strong defensive teams have historically had more success in the tournament.
  2. Strength of schedule: the theory is that the seeding committee favors Power 5 conference teams, which leads to schools in smaller conferences being underrated.
  3. Coaching ranking: experienced coaches have a big impact on their teams’ postseason success.
  4. Miles traveled: teams that travel farther have a harder time winning, especially when they cross time zones.
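
To make the idea concrete, here is a minimal sketch of a win-probability model in the same spirit: a logistic function over feature differences between two teams. The weights and feature values are invented for illustration; this is not Bing’s actual model.

```python
import math

def win_probability(features, weights, bias=0.0):
    """Return P(team A beats team B) from feature differences (A minus B)."""
    z = bias + sum(weights[name] * features[name] for name in weights)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function

# Invented weights, loosely mirroring the factors listed above.
weights = {
    "defensive_efficiency_diff": 1.2,   # stronger defense helps
    "strength_of_schedule_diff": 0.8,   # tougher schedule helps
    "coach_ranking_diff":        0.5,   # postseason-tested coach helps
    "miles_traveled_diff":      -0.3,   # longer travel hurts
}

# Hypothetical matchup: team A has the better defense and coach,
# but traveled much farther than team B.
matchup = {
    "defensive_efficiency_diff":  0.6,
    "strength_of_schedule_diff":  0.1,
    "coach_ranking_diff":         0.4,
    "miles_traveled_diff":        1.5,
}

p = win_probability(matchup, weights)
print(f"P(team A wins) = {p:.0%}")  # ~63%: a probability, not a certainty
```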

While it’s helpful to understand some of the metrics driving the Bing Predicts algorithm, a closer look at how probabilistic predictions work explains why the deck was stacked against Bing.

Why didn’t Bing give better advice? The probability that Bing assigns to any given team winning is very likely the most accurate prediction possible, but a probability is still a probability. If Bing says a team has an 80% chance to win, that team also has a 20% chance to lose. Picking winners in a bracket, though, is binary: you are choosing a team to win, period. When the 20% outcome happens, brackets get busted, and with 67 games to predict, a busted bracket is almost a statistical guarantee.
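
A short sketch makes the point: even if every single pick were 80% likely to be right (a generous assumption), a perfect 67-game bracket would still be roughly a one-in-three-million event.

```python
# If each of the 67 picks is independently right with probability 0.8,
# the chance of a perfect bracket is 0.8 ** 67 -- vanishingly small.
accuracy = 0.80
games = 67

p_perfect = accuracy ** games
expected_misses = games * (1 - accuracy)

print(f"P(perfect bracket)   = {p_perfect:.2e}")        # ~3.2e-07
print(f"Expected wrong picks = {expected_misses:.1f}")  # ~13.4 of 67
```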

Bing could be better at NCAA predictions if it were able to collect data about future events leading up to each game: what each player’s hydration level will be at game time, how much rest each player got the night before, what is going on with each player psychologically, and so on. All of these incremental factors would sharpen what the historical data predicts, if Bing could know them. Even then, every possession can be influenced by random events, and the flow of the game changes the probabilities of how teams play later in the game. The Patriots’ epic comeback in the Super Bowl is a perfect example: they had a 99%+ chance to lose, which means they had a less than 1% chance to win, but not a 0% chance.

What does this mean for the future of AI? We shouldn’t overreact. Every action we take as humans can be viewed as a prediction of a potential outcome, and sometimes predictions are wrong. Take driving as an example: every action we perform while driving is based on a predictive calculation in our heads about where we want the car to go and how to get it there. With 1.3 million people dying every year in traffic accidents, we aren’t very good at those predictions. Machines already drive better than humans because they don’t get distracted by phones or emotional about bad drivers.

While skeptics may laugh that the robots failed to predict the outcome of a basketball tournament, AI is still a better predictor of outcomes than any human because of the amount of data it can incorporate into its predictions. We shouldn’t expect AI to be right 100% of the time, because it won’t be, but it will be right more often than humans making the same predictions. We humans got lucky with our brackets this year, but we shouldn’t expect an easy repeat next year.
