Hyper-connectivity and Artificial Intelligence

How hyper-connectivity changes AI through contextual computing
Chuan (Coby) M
04/03/15
Description of Major Sections
Background
Artificial intelligence (AI) is intelligence exhibited by machines and software. John McCarthy defined AI as “the science and engineering of making intelligent
machines” in the 1955 proposal for the Dartmouth Conference[1].
Ever since its inception, AI development has branched into very different approaches and
subfields; statistical methods, computational intelligence, and traditional symbolic AI have
been the dominant approaches, yet they have rarely been integrated with each other[2].
Over the past decades AI made significant progress, and the general public got its first
real view of AI when Deep Blue defeated world chess champion Garry Kasparov in 1997[3].
Even though the match was surrounded by controversy and has been debated ever since,
it was the first time we saw machines performing a task that we believed only human
brains could do.
Without getting too technical, let’s look at the current approaches to AI. Most of them
rely heavily on raw computational power. Conventional AI uses logic programming and
other computing techniques to evaluate a decision; this method relies on a computer’s
speed at Boolean evaluation and numeric calculation[4]. Statistical methods build a huge
database containing a tremendous amount of ‘past events’ or ‘history’ and use pattern
matching to evaluate the current event; they require enormous storage capacity and
computing power. Other methods, such as computational neuroscience and artificial
neural networks, also depend on sheer computing power. All of these approaches require
networks of very powerful machines, making serious AI accessible only to people and
organizations with plenty of resources. They are also far from efficient: a large share of
the computation performed is effectively wasted.
Human brains perform a huge number of ‘short-circuit evaluations’ because of the amount
of input they gather from the environment. For example, when you are driving on a bridge
and there is a road underneath it that could take you to your destination faster, you will
not drive off the bridge to take the shortcut. However, consider what happens if you have
just switched on your GPS and entered the destination address while you are on the bridge,
[1] http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
[2] Pamela McCorduck, Machines Who Think
[3] IBM
[4] Mark Archer, Building Intelligence: Why Current Approaches to Artificial Intelligence Won’t Work
Glassmeyer/McNamee Center For Digital Strategies |
it is quite likely that the GPS will tell you to drive off the bridge and take the ‘shortcut’.
Many traditional algorithms are simply not designed to gather additional inputs to help
with the evaluation. With more and more sensors and other input devices becoming
available, however, contextual computing can change how AI works and disrupt the industry.
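To make the contrast concrete, here is a minimal sketch of the bridge example (all names and data are hypothetical, not taken from any real navigation system): a context-free route picker versus one that takes the driver’s current position as an extra input.

```python
# Hypothetical sketch: a context-free route picker vs. one that uses
# the driver's current position as an additional input.

def shortest_route(roads):
    """Context-free: simply pick the candidate with the shortest distance."""
    return min(roads, key=lambda r: r["distance_km"])

def contextual_route(roads, current_road):
    """Contextual: first discard roads unreachable from where the car is."""
    reachable = [r for r in roads if r["name"] in current_road["connects_to"]]
    return min(reachable, key=lambda r: r["distance_km"])

roads = [
    {"name": "road under bridge", "distance_km": 2.0},
    {"name": "bridge exit ramp", "distance_km": 5.0},
]
bridge = {"name": "bridge", "connects_to": ["bridge exit ramp"]}

print(shortest_route(roads)["name"])            # the impossible 'shortcut'
print(contextual_route(roads, bridge)["name"])  # respects where the car is
```

The extra input lets the algorithm short-circuit a whole branch of the search before evaluating it, much as the driver does instinctively.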
Firstly, AI research and product development are no longer reserved for the ‘rich and
powerful’. We now see more and more products that use some aspect of AI and
contextual computing, built with very minimal development resources.
Secondly, big firms that have built up tremendous computing and storage capacity are
no longer automatically more competitive than their smaller rivals. With contextual
computing, raw computing power matters less; the prized asset of the old era could even
become a burden.
Last but not least, contextual computing enabled by hyper-connectivity creates more
opportunities to reshape existing markets and generate new business demand. Big firms
will accordingly need to monitor this space for potential acquisition targets.
Hyper-connectivity will enable a new era of computing and impact artificial intelligence
through contextual computing; in many cases that impact could be disruptive.
Key Challenges
First and foremost, hyper-connectivity requires getting data from devices, and concerns
over privacy and data safety are among the biggest challenges for businesses. Over the
last few years we have witnessed high-profile hacking incidents in which very sensitive
data was leaked. Users are also becoming more conscious about sharing their data and
feel that doing so invades their personal space to a certain degree. As a result,
businesses need to protect users’ data constantly and gather it in ways users are willing
to accept.
Secondly, there is growing debate about the ownership of data. Hyper-connectivity
makes it easier for services to collect data from users in order to provide better user
experiences, but who owns the collected data remains contested. When firms use this
data to create economic benefit for themselves, should users get a share of the
proceeds? More and more legislators are starting to work on this issue.
Last but not least, ‘mobile devices as input devices’ is only a small part of hyper-connectivity.
To obtain higher-quality and more varied data, networks of input
devices need to be deployed. In some cases, such infrastructure takes huge amounts of
capital and a long time to deploy, and at the current stage its ROI is unknown, making
the investment hard to justify. So far, most comparable infrastructure has been deployed
by private owners for proprietary use; public access to these networks is rarely granted.
To sum up, most of the challenges in this space revolve around uncertainty over
regulatory issues, user acceptance, and economic benefits. However, if firms only want
to invest once some of those uncertainties are resolved, the market could become
intensely competitive by then. It is therefore most important for businesses to develop
frameworks for deciding how to invest in this particular space.
Industries / Groups Impacted
B2C
The industry most directly impacted is the software development/high-tech sector.
Regardless of the size or stage of the company, the impact will be significant.
Big, established firms can face strong competition from smaller competitors or even
start-ups; without solid product offerings, big firms can become obsolete in a rather
short time. We have seen a similar trend in the mobile industry, where Nokia went from
market leader to near-zero market share in a very short period.
Smaller firms and even individual developers can now build AI-related products enabled
by contextual computing and displace market incumbents. Through contextual
computing, AI becomes more about algorithms and identifying what data to use than
about sheer computing and storage power. Many developers already use the location
data available through mobile devices to create innovative products; as more contextual
data becomes available, developers will have more opportunities to create new products.
B2B
On the B2B side, software development (business, industrial) and IT services firms will
be impacted. Consumerization of IT plays a vital role in this shift: once enterprise users
get contextually enabled AI on their personal devices, the bar for business and industrial
solutions will rise too. Furthermore, as the benefits of hyper-connectivity and AI become
more obvious, users will demand such functionality; existing solutions that cannot
develop these features will slowly be replaced by new products. The impact will unfold
much more slowly than in the B2C sector, because B2B purchasing decisions are far
more complicated and time-consuming.
Beyond the impact on existing businesses, new business opportunities may emerge.
Hyper-connectivity depends on networks that let data flow, so infrastructure such as
data networks and input-device networks may itself become a new business. Data
aggregators could also play a vital role: instead of getting data from end users or input
devices directly, developers could obtain it from aggregators in a standard format,
saving development time and cost. Another reason for data aggregators to exist is
regulatory: rather than dealing with data protection and compliance themselves,
developers may find it easier to outsource that part of the job.
Business Benefit
The biggest business benefit of contextually enabled AI is that the space is still largely
uncharted; early movers can quickly gain market share and customers before
competitors join. Early movers also have the opportunity to shape the industry by setting
users’ expectations for user interface, user experience, features, and business models.
In almost every industry, people can benefit from smarter and more accurate AI, so
there are plenty of opportunities to displace existing players in the market.
For start-ups, it is easier to find good exit opportunities. Many big firms currently have
no significant R&D activity in this space and are looking to acquire established
start-ups, which is relatively cheaper and less risky for them. With few acquisition
targets on the market, firms may pay a premium on the price.
Case Examples
Waze
Waze is a GPS application for smartphones that differentiates itself from traditional
GPS programs by giving users the option to upload information.
Waze succeeded by collecting and distributing contextual data in real time to provide a
better user experience and better product functions. For example, the app receives
traffic information so that users can learn about jams and avoid the area, and its
turn-by-turn navigation can plan better routes. As more people use the application, its
traffic information becomes more and more accurate. Beyond automatically uploaded
and downloaded information, users can also report accidents, lane closures, and speed
traps; those reports are then shared with other users driving in the same region. The
application constantly refreshes this information so that users always have the most
up-to-date picture.
Furthermore, the application uses contextual information such as the time and your
current location. For example, when you get into your car around 5 p.m. on a weekday,
the app knows you are most likely heading home from work and will ask whether you
want turn-by-turn navigation with traffic information so that you can get home quickly.
The application also learns your driving patterns so that it knows your preferences.
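A toy sketch of this kind of habit learning (entirely hypothetical; Waze has not published its actual algorithm) might simply count past trips per time slot and suggest the most frequent destination:

```python
from collections import Counter

# Hypothetical habit-based destination guesser; not Waze's real logic.

def predict_destination(trip_log, hour):
    """Suggest the most frequent past destination for this hour of day."""
    candidates = Counter(dest for h, dest in trip_log if h == hour)
    if not candidates:
        return None  # no history for this time slot
    return candidates.most_common(1)[0][0]

# Past trips recorded as (hour_of_day, destination).
trips = [(17, "home"), (17, "home"), (17, "gym"), (8, "work")]

print(predict_destination(trips, 17))  # prints "home"
```

The point is that the heavy lifting is done by a few contextual signals (time, location history) rather than by massive computation.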
Due to its uniqueness and its better customer experience, Waze quickly attracted a
large number of users. In 2013, Google acquired Waze for a reported $1.3B.
TaKaDu
TaKaDu is water network management software that uses real-time utility data and
translates it into actionable insights.
One of TaKaDu’s major functions is leak detection. Traditionally, leaks in utility water
networks are detected by experienced professionals inspecting from above ground, and
leakage often goes unnoticed. Alternatively, some equipment can detect leakage by
monitoring factors such as pressure; however, this method cannot differentiate real
water usage from leakage and cannot detect very slow leaks. TaKaDu takes a very
different approach: it collects a variety of utility data, such as smart-meter readings,
pressure, and reservoir levels, and uses the data to monitor for leakage events. The
system constantly receives data from the utility, so any event can be detected in near
real time.
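As a rough illustration of the idea (the 3-sigma rule and the numbers below are invented for this sketch; TaKaDu’s actual models are more sophisticated), a leak can be flagged when flow stays well above the historical baseline for the same hour:

```python
import statistics

# Illustrative statistical leak check on night-time flow readings;
# the 3-sigma threshold is an assumption for this sketch.

def sustained_excess(baseline_flows, recent_flows, sigma=3.0):
    """True if every recent reading sits far above the historical baseline."""
    mean = statistics.mean(baseline_flows)
    threshold = mean + sigma * statistics.pstdev(baseline_flows)
    return all(f > threshold for f in recent_flows)

# Flow (m^3/h) at 3 a.m. on previous nights vs. the last readings tonight.
baseline = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0]
tonight = [6.8, 6.9, 7.1]

print(sustained_excess(baseline, tonight))  # prints True
```

Requiring the excess to persist across several readings is what separates a slow leak from a burst of genuine usage.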
The system had early success in countries such as Israel and Australia, where water is
scarce. It relies on a network of smart meters, flow meters, and other utility data
sources; hence it cannot yet be deployed in the US, where such
connectivity/infrastructure is not available. The company is currently growing its
business globally.
BeaconsInSpace
BeaconsInSpace is a startup that helps application developers find and rent beacon
networks. A beacon network is a critical piece of infrastructure used by applications for
indoor positioning. Traditionally, developers had to deploy their own beacons or
negotiate with beacon owners to use theirs, a process that is extremely lengthy and
costly for most developers. BeaconsInSpace focuses on making the infrastructure that
provides this connectivity and contextual data available for public use.
Beacon owners now know they can make real money from their beacons, so part of
their ROI is known; furthermore, the need for duplicated beacon networks is eliminated,
so capital is used more efficiently. Application developers, in turn, can focus on what
they do best: developing applications, without worrying about the infrastructure. If we
think of application developers as car makers, the traditional way forced the car makers
to build the roads before they could make cars; now someone else builds the roads for
them.
BeaconsInSpace is at a very early stage and raised its seed round in early 2015.
Summary for cases
Over the last few years we have seen many companies working on AI and contextual
computing, and many of them have exited. More recently, firms focusing on the
infrastructure for hyper-connectivity and contextual data have started to appear, and
the ecosystem is growing. We should closely monitor firms that focus on this
infrastructure; they are a good indicator of the wellbeing of the industry.
Conclusion
Smarter, more human-like algorithms will become more and more important in the
world of computing, and businesses will enjoy the benefits of such products. With
contextual data enabled by hyper-connectivity, artificial intelligence can be approached
in a new and cost-effective way. Early movers who crack the formula can gain
significant market share and, therefore, economic benefits.
Recently we have seen numerous examples of AI taking advantage of contextual data,
and some success stories. However, the contextual data in use today is primarily
location-based; a huge amount of other contextual data is waiting to be mined by
businesses and developers. Businesses that focus on creating the infrastructure of
hyper-connectivity can also play a vital role in the future.
About the Center
The Glassmeyer/McNamee Center for Digital Strategies at the Tuck School of
Business focuses on enabling business strategy and innovation. Digital strategies and
information technologies that harness a company's unique competencies can push
business strategy to a new level.
At the center, we foster intellectual leadership by forging a learning community of
scholars, executives, and students focused on the role of digital strategies in creating
competitive advantage in corporations and value chains. We accomplish this mission by
conducting high-impact research; creating a dialog between CIOs and their functional
executive colleagues; and driving an understanding of digital strategies into the MBA
curriculum.
We fulfill our mission by concentrating on the three following areas:
Scholarly Research
Connecting practice with scholarship anchored on IT enabled business strategy and
processes.
Executive Dialog
Convening roundtables focused on the role of the CIO to enable business strategy.
Curriculum Innovation
Bringing digital strategies into the classroom through case development and experiential
learning.