seamless way to integrate short-range
mesh networks into the long-range
solutions (cellular and LPWAN) that provide
connections to the world outside
the plant.
AI EVERYWHERE
It’s been hard to miss all the hype about
artificial intelligence and how it will
transform everything it touches. The
latest trend is toward “AI at the edge” of
IoT deployments, the edge being where
the data is generated by devices,
typically sensors, that monitor various
characteristics of the equipment they’re
attached to. At the moment, nearly all
this data is sent to cloud data centers,
which causes two major problems:
high end-to-end latency and a massive
burden on the communications pathway
between the edge and the cloud. In addition,
as even a relatively small IoT deployment
can generate huge amounts of
data, it’s becoming increasingly obvious
that some of this processing should be
performed at the edge.
Reducing latency is far from trivial,
as the laws of physics dictate the minimum
time that can be achieved for a
signal traversing a given distance and
back. The least latency will always be
delivered over the shortest distance, taking
into consideration the processing,
computing, and other functions performed
along the way. For IoT, this is data
traveling outward from the edge device
to the cloud, and the return response
from the cloud to the device (Figure 2).
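To make the distance argument concrete, here is a minimal back-of-envelope sketch (in Python) of the propagation-only floor on round-trip delay. It assumes a signal speed of roughly two-thirds the speed of light, as in optical fiber; the distances are illustrative, and processing, queuing, and radio-access delays would all add to these figures.

```python
# Back-of-envelope, propagation-only round-trip delay.
# Assumes ~2/3 the speed of light, as in optical fiber; processing,
# queuing, and radio-access delays would come on top of this floor.
C = 299_792_458          # speed of light in vacuum, m/s
V_FIBER = 2 / 3 * C      # rough signal speed in fiber, m/s

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time for a signal over the given one-way distance."""
    return 2 * (distance_km * 1000) / V_FIBER * 1000

# Illustrative distances: local edge gateway .. regional site .. distant cloud
for km in (1, 50, 500, 2000):
    print(f"{km:>5} km one-way -> {round_trip_ms(km):6.2f} ms minimum round trip")
```

Even before any processing time is counted, the propagation floor for a data center a couple of thousand kilometres away is already well beyond the single-digit-millisecond budgets of real-time applications, which is why the shortest path wins.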
To alleviate these problems, the
goal is to split the tasks of processing
and analytics between the cloud and
the edge, which would reduce the
end-to-end latency to levels suitable
for real-time applications at the edge
and reduce the amount of data sent to
the cloud. Most of the attention to this
approach has focused on large-scale
IoT applications such as industrial production
facilities and “smart” cities, but
it will be a major component of 5G as
well, and for mostly the same reasons.
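As a rough illustration of that split, the sketch below has an edge node compute a cheap local summary and flag anomalies immediately, so only a compact digest ever crosses the backhaul to the cloud. It is only a sketch: the threshold, field names, and the send_to_cloud call are hypothetical, not taken from any particular deployment.

```python
import statistics
from typing import Iterable

ANOMALY_THRESHOLD = 3.0  # hypothetical z-score cut-off, chosen for illustration

def process_at_edge(readings: Iterable[float]) -> dict:
    """Run the cheap, latency-critical part locally and keep raw data off the backhaul.

    Only a compact summary (plus any anomalous samples) is returned for
    upload to the cloud, where heavier analytics and model training live.
    """
    readings = list(readings)
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings) or 1.0
    anomalies = [x for x in readings if abs(x - mean) / stdev > ANOMALY_THRESHOLD]
    return {"count": len(readings), "mean": mean, "stdev": stdev, "anomalies": anomalies}

# The edge device reacts to anomalies immediately; the cloud only ever sees the summary.
summary = process_at_edge([20.1, 20.3, 19.9, 35.7, 20.2])
if summary["anomalies"]:
    pass  # trigger the local, real-time response here
# send_to_cloud(summary)  # hypothetical uplink call, batched or sent on a slow schedule
```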
As it applies to 5G, most of the talk
about AI (and its subsets, machine
learning and deep learning) focuses
on network management and other
high-level applications that reduce operating
costs through precision network
planning, capacity expansion forecasting,
autonomous network optimization, and
dynamic cloud network resource
scheduling, among others. However, AI
will eventually further expand its reach
even to smartphones that today rely
on the massive resources in the cloud.
For this to occur, the
semiconductor industry
will need to develop
“on-device AI” realized
by dedicated coprocessors
or accelerators,
a market that has just
emerged and is growing
rapidly with more than
40 start-up companies
working on the problem,
along with the usual cohort
of deep-pocketed
silicon vendors.
The need for AI at the
edge is perhaps most
obvious for the autonomous
transportation
environment.

Figure 2 – Reducing latency is essential for some applications but less so for others. Source: GSMA Intelligence.

When it arrives, this application will
inherently require decisions to be made
from data produced by sensors in a few
milliseconds or even less. Latency this
low can only be achieved over a very
short distance, which effectively mandates
placing intelligence locally, in the
vehicles and the roadside infrastructure
that supports them. As the technology
most likely to be used for intelligent
transportation system communication
is cellular, through the industry’s
“Cellular Vehicle-to-Everything” (C-V2X)
architecture, AI at the edge will become
a fundamental element of this
application.
To support all this data, network
topologies such as Cloud-RAN will be
complemented or replaced by virtualized
RAN (vRAN) along with edge computing
and integrated AI. C-RAN splits a
base station in two, with the baseband
unit performing processing (and soon
analytics), and the remote radio heads
delivering the RF portion of the system.
In contrast, vRAN realizes baseband
functions “virtually” in software, which
makes resource allocation more flexible
and allows it to be adjusted in near
real time. 5G’s
expansion of cellular technology to
include IoT requires these resources to
be controlled at a local level to reduce
latency and improve the performance of
the systems it supports, a task that
vRAN is designed to handle.
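One way to picture the difference is to treat baseband processing as a software-managed pool rather than fixed hardware. The toy model below is only a sketch of that idea; the class and identifiers are invented for illustration and do not correspond to any vendor’s vRAN API.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualBasebandPool:
    """Toy model of a vRAN baseband pool: capacity is generic compute
    that software can reassign between cells, unlike a fixed baseband unit."""
    capacity: int
    allocations: dict[str, int] = field(default_factory=dict)

    def allocate(self, cell_id: str, units: int) -> bool:
        used = sum(self.allocations.values())
        if used + units > self.capacity:
            return False            # would exceed the pool; caller must rebalance
        self.allocations[cell_id] = self.allocations.get(cell_id, 0) + units
        return True

    def release(self, cell_id: str) -> None:
        self.allocations.pop(cell_id, None)

# A local (edge) controller can shift capacity toward a busy cell in near real time.
pool = VirtualBasebandPool(capacity=100)
pool.allocate("cell-urban-01", 60)
pool.allocate("cell-stadium", 30)
pool.release("cell-urban-01")       # traffic moved away; free those units...
pool.allocate("cell-stadium", 50)   # ...and give the stadium cell more baseband compute
```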
Another resource in the carrier
toolbox is network slicing, which, among
other things, can make more “granular”
use of AI. Network slicing allows
multiple networks to run on top of a
single shared physical network, providing
an end-to-end virtual network and
letting carriers partition their resources
to allow multiple “tenants” to multiplex
their signals over a single physical infrastructure.
So, for example, traditional
high-speed cellular service, low-power
IoT, and low-latency applications could
be served by a single network, in slices.
The resources allocated to these three
very different applications can be adjusted
in near real time.
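A simplified picture of that arrangement, with invented numbers: three tenant profiles share one physical network’s capacity, and an orchestrator can shift their shares at runtime. The slice names, shares, and latency targets below are illustrative only.

```python
# Illustrative only: three slices sharing one physical network's capacity,
# with shares that an orchestrator could rebalance in near real time.
TOTAL_CAPACITY_MBPS = 10_000

slices = {
    "embb":        {"share": 0.70, "max_latency_ms": 20},    # traditional high-speed service
    "massive_iot": {"share": 0.10, "max_latency_ms": 1000},  # low-power, delay-tolerant
    "low_latency": {"share": 0.20, "max_latency_ms": 1},     # e.g. C-V2X-style traffic
}

def rebalance(new_shares: dict[str, float]) -> None:
    """Adjust slice shares; together they must still cover the whole physical network."""
    assert abs(sum(new_shares.values()) - 1.0) < 1e-9
    for name, share in new_shares.items():
        slices[name]["share"] = share

for name, cfg in slices.items():
    print(f"{name:12s} {cfg['share'] * TOTAL_CAPACITY_MBPS:7.0f} Mb/s, "
          f"<= {cfg['max_latency_ms']} ms target latency")

# A demand spike on the low-latency slice: shift capacity away from eMBB.
rebalance({"embb": 0.60, "massive_iot": 0.10, "low_latency": 0.30})
```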
The benefits delivered by network
slicing appear similar to those from VPNs,
network function virtualization, and
other approaches. However, network
slicing has one benefit the others can’t
provide: the ability to generate additional
revenue. By leasing slices on a
long- or short-term basis, carriers can
create an entirely new market sector
tailored to customers whose
needs differ. As part of the package,
various levels of intelligence and
other data-centric resources such as
computational horsepower and storage
can be offered to these customers, as
needed.
SUMMARY
A decade or more from now, everyone
having anything to do with the development
of 5G will look back to 2019 as
the year when the massive amounts of
time, money, and sweat began to reveal
themselves in actual deployed systems.
By that time, small cells, AI, and dozens
of other technological breakthroughs
will have advanced dramatically, and
hopefully, millimeter-wave frequencies
will have proven themselves useful. Precisely
when it will be safe to reminisce
remains to be seen, as 5G is evolutionary
as well as revolutionary, so there
may not even be a need for something
called 6G.