The Properties of “Facilitating Services” that Could Save Telcos

I’ve said several times in my blog that I’m a fan of the notion of network operators’ providing “facilitating services”. This is a notion raised by AT&T, one I blogged about HERE, and the reason I like it is that it seems to strike a balance between operators’ fixation on providing traditional services and the fact that the revenue per bit for those services is (let’s face it) doomed. The question is what these services could be and how operators would go about offering them. I did a multi-blog series on the technology aspects of facilitating services, but I want to take a step back and look at the properties a facilitating service would have to offer to be valuable. From that, we may be able to derive some tracks that facilitating services could follow.

The first property is broad market utility. Putting it another way, the facilitating services must be targeted at the consumer space, because (as I noted in my blog on Tuesday) business-specific data services are doomed niches. Something targeted there is simply not going to generate enough incremental revenue to make any investment worthwhile. The path to broad market utility divides in two. The first branch exploits what is already being done online, and the second exploits what is likely to come.

What’s already been done is a mixture of general Internet elements and content delivery elements. In the first category, DNS is already offered by almost every operator, but operators don’t charge for DNS, so it’s not much of an opportunity. Same with DHCP. Many operators offer security services for consumers and businesses, but uptake is mixed because their offerings are usually simply the resale of something available elsewhere.

Content delivery facilitation is another option, particularly mobile content. Current CDN technology works well in wireline/fixed services because current CDN suppliers can peer with the broadband provider at a fixed point. Mobile roaming means that the optimum caching point may have to change depending on where the user is located. However, people consuming video don’t tend to roam far or quickly. In short, this area is already covered well enough that it’s probably too late for operators to jump in.

The second of our areas shows more promise, but also generates more risk. I’ve blogged for years about an area I called contextual services, services designed to integrate applications and information tightly with the life-context of users. It probably hasn’t escaped those who’ve been reading my blog that the “digital-twin” framework I’ve asserted is the essential element of the metaverse concept could serve as a modern basis for these new services.

Any service that’s intended to integrate with the real world needs to know what’s going on there, and needs to be able to organize and reference that knowledge in the way it interacts with the user. That means, IMHO, creating a “digital twin” of those real-world elements that are needed to support that interaction, and this is the piece of the metaverse concept that I believe is essential. Other layers build on it, in fact, and that means that a service that supported collection and modeling for digital twinning is a poster child for a “facilitating service”.
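
To make the collection-and-modeling idea concrete, here’s a minimal sketch; the class names and event format are purely hypothetical, not a reference to any real product or standard:

```python
from dataclasses import dataclass, field
from typing import Any, Dict
import time

@dataclass
class TwinEntity:
    """Last-known state of one real-world element (a person, vehicle, room, etc.)."""
    entity_id: str
    state: Dict[str, Any] = field(default_factory=dict)
    updated_at: float = 0.0

class TwinRegistry:
    """The 'facilitating' layer: ingest real-world events, keep twins current,
    and let higher-layer (retail) services query the model instead of the world."""
    def __init__(self) -> None:
        self._twins: Dict[str, TwinEntity] = {}

    def ingest(self, entity_id: str, observation: Dict[str, Any]) -> None:
        twin = self._twins.setdefault(entity_id, TwinEntity(entity_id))
        twin.state.update(observation)   # merge the latest observation into the twin
        twin.updated_at = time.time()

    def query(self, entity_id: str) -> Dict[str, Any]:
        twin = self._twins.get(entity_id)
        return dict(twin.state) if twin else {}

# A sensor or location feed updates the twin; a retail service reads it back.
registry = TwinRegistry()
registry.ingest("delivery-van-17", {"lat": 40.74, "lon": -74.17, "speed_kph": 42})
print(registry.query("delivery-van-17"))
```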

Speaking of layers, there are opportunities for facilitating services above the basic digital-twinning layer. Most digital twins, especially those that support an individual-subjective service set, would likely reside close to the individual contracting for the service. However, we could say that above the digital-twinning layer is a visualization layer. The metaverse models reality, and how reality should be presented would depend on the nature of the higher-layer service. The Meta concept of a social metaverse is based on creating an alternate or virtual reality, and one attribute of that reality is a remapping of user geographies. A social metaverse creates a shared virtual place whose inhabitants are not sharing a physical location. Other metaverse concepts could have more limited need for “location remapping”, but all require some form of visualization, which is another facilitating service opportunity, and so is the “location remapping”.

A final point here is that digital-twin applications of any sort are likely to be very latency-sensitive, and that favors those who can provide hosting proximate to the inside edge of the access network. Obviously, network operators can do that, and so this point would favor their participation in the application set via facilitating services—edge hosting in this case. That has the advantage of being a very general service, one that doesn’t link the operator to a specific application and take it into competition with OTTs.

In all, some sort of digital-twin-metaverse concept seems to be the best opportunity for operators to target with their facilitating services.

The second of our properties is a close relationship to current operator services. That doesn’t mean voice; there’s no future opportunity there because OTT voice is simply too easy a market to enter. It means current broadband Internet, both wireless and wireline/fixed.

The question here is just what might constitute a “close relationship”. One possibility is that a facilitating service would exploit features of a current service. The feature of connectivity is an example, and it’s probably the most insidious of all the traps for operators, and so of course they fall into it continually. We have connectivity today, and we’ve built whole industries, maybe whole economies, on the connectivity we have. So what new connectivity feature do operators think we need?

There is a feature that might be useful, though, and that’s the feature of location awareness. Wireline services carry the knowledge of where they terminate. Mobile services carry the knowledge of what towers they’re connecting through. The trick is our last property, avoiding regulatory pitfalls. Selling or even using the specific location of a user is a minefield of public policy issues. The good news is that there’s an alternative.

If lemmings are marching to the sea rather than happily grazing in a meadow, you don’t need to survey individual lemmings to find out where they’re going; the herd is moving, and it’s that aggregate truth that matters. The same is true, for example, for auto or pedestrian traffic. Do you care who, specifically, is congesting your route, or do you care that your route is congested and that patterns of traffic on and off your route suggest it will clear in twelve minutes? There is value in the knowledge of the broad movements of mobile users.

You could make a similar case for the patterns of usage of calls and the Internet, plotted against location. A given group of people will make and receive a number of calls and texts, and will perform a number of Internet searches. Suppose that activity levels suddenly change radically? Something is going on, and the something is visible enough to change mass behavior. That’s useful knowledge, which means it’s exploitable.
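
Here’s a minimal sketch of how that aggregate signal might be detected, using made-up numbers and a simple z-score test rather than anything an operator actually deploys:

```python
import statistics

def aggregate_activity_anomaly(history, current, z_threshold=3.0):
    """Flag a radical change in aggregate activity (calls, texts, searches)
    for an area, without referencing any individual user."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against a flat history
    z = (current - mean) / stdev
    return abs(z) >= z_threshold, z

# Hourly call counts for one cell-tower cluster over the last eight hours.
recent_hours = [1180, 1225, 1190, 1210, 1175, 1202, 1198, 1215]
flagged, z = aggregate_activity_anomaly(recent_hours, current=2950)
print(f"anomaly={flagged}, z={z:.1f}")   # something is clearly going on
```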

Another possibility is that the deployment of a facilitating service would be aided by real estate and technology associated with broadband deployment. That means, IMHO, that the smarts associated with facilitating services would reside in metro centers where economies of scale and service opportunities would both be high, and latency could be minimized.

The third property is a relatively high first cost and low ROI. Operators want to sell facilitating services at a profit, but their tolerance for low ROIs is higher than that of OTT players. Operators can also accept a fairly high “first cost”, meaning the cost to create a credible initial service over a viable market footprint. A high tolerance for marginal returns on investment is critical if you’re going to offer a wholesale service element and expect to make a profit. A high first cost is critical because you don’t want the retail providers rolling their own features, you want them to buy yours. If first costs are high it discourages those retail providers from cutting you out.

This is a more serious issue than most, including most operators, realize. There’s no point in investing in something that ends up being an attractive paperweight. The person who owns the customer, the retail provider operators are hoping to facilitate, holds the most critical asset. That means the operator has to have a counterbalancing asset, and that asset (like retail customer ownership) comes down to a financial value. You do what the other company needs and doesn’t want to do itself; otherwise you’re throwing money away.

The final property is avoiding regulatory pitfalls. Given the second of our properties, any meaningful facilitating service will have to be related to broadband Internet, and that area is fraught with regulatory uncertainty and has been for decades. Operators are justifiably reluctant to enter a service market whose investment is at risk if there’s a change of policy.

Facilitating services are easier to steer around regulatory issues, for the simple reason that most regulatory policy focuses on what the consumer is offered. However, there is a second-level vulnerability in terms of the use and misuse of information.

On the “use” side, there’s all manner of regulations protecting consumer information and privacy. These rules would likely prevent a network operator from leveraging information they had as a result of the customer relationships they supported. If a hacker might want something, you probably don’t want to offer it as a facilitating service.

Relative to “misuse”, the issue is that you could expect any facilitating service offered to be examined by malefactors to identify ways it could be exploited. If one is found, the repercussions could be dire if someone suffers financial loss or injury, or if it turns out that some aspect of the service doesn’t meet regulatory requirements.

This issue may be the most problematic for facilitating services, particularly for “digital-twin” services, because by nature these services require a level of personalization, which means they have to know who the user is and something about their life.

I think we could, as an industry, navigate through the issues and create valuable facilitating services, but doing that almost surely starts with recognizing and accepting the concept of a service-level partnership between operators and OTTs, something that’s really not part of the current mindset of either party. I hope that AT&T’s interest, which seems to me to be flagging a bit, is enough to get everyone thinking, at least. The concept, or lack of it, could change networking.

How Does NFV Really Relate to Feature Hosting?

Recent discussions on LinkedIn that relate to how NFV and O-RAN fit seem to show that there’s still a wide range of views on the way that hosted features relate to network services. Since that may well be the most important topic in all of networking, and surely is for network infrastructure, I want to weigh in on the issue.

NFV, or “network functions virtualization”, came about because of a 2012 Call for Action paper put out by ten network operators. If you look at the paper, and at the early end-to-end architecture model of NFV that’s still largely in place today, you find that the high-level goal of NFV was to substitute “virtual network functions” or VNFs for network appliances, or “physical network functions”. The paper in particular seems to focus on “appliances” rather than on the broader “network devices” category. The Call for Action paper, for example, talks about “many network equipment types” as the target. It also focuses on equipment other than switches and routers (see Figure 1 of the paper if you have access to it) and the functional software elements that replace “appliances” are called “virtual appliances” in that figure. All this is why I contend that NFV was targeted at replacing discrete devices with virtual devices.

The architectural focus of the white paper and the end-to-end model means in part that NFV management strategy was aligned with the goal of meshing VNFs with existing element management systems (EMSs) that were used for the appliances the VNFs would replace, and with OSS/BSS tools. Most of the early proof-of-concept projects approved for NFV also targeted “universal CPE”, which of course is almost exclusively relevant in the business service space. It’s that combination of alignments that I contend focuses NFV on customer-specific business service deployments.

This sort of stuff is understandable in light of the original NFV mission, but it collides not only with the way that hosting is used in later service standards (notably 5G), but even in some ways with the original Call for Action white paper. Some of the appliance types included in the Figure 1 I referenced earlier are community- rather than customer-focused, and I contend those demand a different management model. That was in fact one of the things I addressed in the PoC I submitted (on behalf of a collection of vendors). I called services directed at community elements like 5G components “Infrastructure Services”, meaning that they represented services deployed as part of overall infrastructure, for community use, rather than per-user.

A traditional element management model implies that a single management element controls a specific network element, and where that network element is a shared resource that cannot be the case. In fact, any management interface to a shared element poses a risk that shared use will create a kind of “management-denial-of-service” attack, where too many users/services reference a management API and the workload causes the element to collapse. The notion of “derived operations”, which was based on the (sadly abandoned) IETF draft called “Infrastructure to Application Exposure” or i2aex, addressed this by maintaining a management database that users queried and updated through any number of software/virtual MIB proxies, while the database alone referenced the actual MIBs.
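
As a rough illustration of the derived-operations idea (a sketch only; the i2aex draft defined no API like this), a single poller touches the shared element’s real MIBs, and every other consumer queries the derived database:

```python
import threading, time

class DerivedOperationsStore:
    """Management database for a shared element: one poller reads the real
    MIBs on a schedule; all users/services query this store instead."""
    def __init__(self, poll_device, interval_s=30):
        self._poll_device = poll_device      # the only path that touches real MIBs
        self._interval_s = interval_s
        self._cache = {}
        self._lock = threading.Lock()

    def poll_once(self):
        snapshot = self._poll_device()       # e.g. one SNMP walk of the element
        with self._lock:
            self._cache = snapshot

    def run_forever(self):
        while True:
            self.poll_once()
            time.sleep(self._interval_s)

    def query(self, oid):
        """Any number of per-service 'virtual MIB' proxies call this; their
        load never reaches the shared element, so no management-DoS."""
        with self._lock:
            return self._cache.get(oid)

def fake_device_poll():
    # Stand-in for a real SNMP/NETCONF read of the shared element.
    return {"ifInOctets.1": 123456789, "ifOperStatus.1": "up"}

store = DerivedOperationsStore(fake_device_poll)
store.poll_once()
print(store.query("ifOperStatus.1"))         # -> up
```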

The other issue presented by the NFV architecture is one I’ve mentioned before, which is a deviation from the cloud’s own evolutionary path. Cloud computing at the time NFV launched was largely based on virtual machines and “infrastructure as a service” or IaaS. Today it’s much broader, but a major part of NFV was explicitly tied to the IaaS/VM approach. The notion of creating a functional unit (a virtual device or a VNF) by composing features was actually raised in the spring 2013 NFV ISG meeting, but it would have required vendors to “decompose” their current appliance software, since applications weren’t built that way then. Today, of course, cloud applications are regularly implemented as compositions of feature components, and standards like 5G and O-RAN presume that same sort of feature-to-service relationship. To get terminological on you, NFV was about functions, which are collections of features; the cloud is about features and how to collect them.
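
To put that terminology in code form (a toy sketch, not a real VNF or O-RAN component), the NFV view deploys a whole “function” as one bundled unit, while the cloud view composes a service from independently deployable features:

```python
# Feature view (cloud, 5G, O-RAN): small components, collected per service.
def classify(packet):     packet["class"] = "web";   return packet
def filter_rules(packet): packet["allowed"] = True;  return packet
def log_event(packet):    packet["logged"] = True;   return packet

def compose(*features):
    """Build a service chain from independently deployable feature components."""
    def service(packet):
        for feature in features:
            packet = feature(packet)
        return packet
    return service

# Function view (NFV): the unit of deployment is a whole virtual device that
# bundles all of its features, just as the appliance it replaces did.
def firewall_vnf(packet):
    return log_event(filter_rules(classify(packet)))

edge_service = compose(classify, filter_rules, log_event)   # cloud-style composition
print(edge_service({"src": "10.0.0.1"}))
print(firewall_vnf({"src": "10.0.0.2"}))
```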

My references to 5G and O-RAN here should demonstrate why I’m bothering with this topic now. The fact is that when NFV launched, we had network devices that we were trying to virtualize. Now, we have network features that we’re trying to integrate, and that sort of integration is exactly what the cloud is doing better and better every year. We have orchestration tools and concepts for the cloud that make NFV’s Management and Orchestration (MANO) look like Star Trek’s “stone knives and bearskins”. Projects like Nephio and Sylva are advancing cloud principles in the network space, and Sylva is explicitly targeting edge computing applications, in which group I’d place O-RAN.

As one of my LinkedIn contacts noted recently, we do have successful applications of NFV, examples of how it can be modernized. That doesn’t mean that we have a justification to continue to try to adapt it to current feature-hosting missions when we have cloud technology that’s not only addressing those missions already, it’s evolving as fast as new missions are emerging.

Standards groups are the wrong place to try to introduce software, and open-source projects are the right place. That doesn’t mean that simply morphing NFV into an open-source project would have done better, though. In my view, both Nephio and Sylva are current examples of open-source projects directed at a telco mission, and neither of them is proceeding in what I believe to be the optimum way. It’s always difficult to understand why that sort of thing happens, but my personal view is that telco participation in the projects is essential because telcos are the target, but telcos don’t have software architects driving their participation. As a result, they drive sub-optimal decisions without realizing it.

So is there no hope? I think there is, but I think that the “hope” we have is the hope that the processes we need will evolve without the telcos’ involvement, and that because of that those processes will be focused more on the public cloud than on the telco. There will be renewed cries of “disintermediation” from the telco side, of course, but in this evolving edge and function hosting model as in past service initiatives, the telcos have succeeded in disintermediating themselves. If they want to stabilize their profit per bit, they need to get with the software program in earnest.

A Planners’ Perspective on 5G

What’s wrong with 5G? There’s no question that it’s deploying broadly, after all. There’s no question that operators are committed to it. There’s no question that new smartphone models support it almost universally. There’s also no question that, like most tech these days, it’s been hyped mercilessly, and that (again, like most tech these days) it’s moved from the “it’s-everything-you-ever-wanted” to “it’s-a-total-wreck” phases of coverage in the media. I thought it might be interesting to see what thoughtful telco planners said about the technology and its future, so I culled my contact information to find out.

Back when 5G was first starting to deploy, those “thoughtful telco planners” believed that 5G was an essential evolution of cellular technology, just as 4G/LTE was. They also believed that 5G could support a new set of applications, and that it would also support new telco business models, but even though stories about 5G were already dancing on the edge of realism, planners didn’t see it as a revenue/profit revolution. That changed.

Over the last two years, as promotion of “5G benefits” grew in scope and stridency, the majority of planners got a bit more optimistic about the impact of 5G on revenues. This came about for two reasons. First, there was a growing buzz in the media, and that provided operators with (false) confidence in their supply-side visions. There was a tendency to apply the “Field of Dreams” theory of “build it and they will come” to 5G. In other words, it wasn’t that planners had any specific new plans to realize 5G revenue, as much as that they expected the availability of 5G to result in the almost-immediate realization of that “new set of applications” and “new telco business models”. Today, about two-thirds of thoughtful telco planners believe 5G will drive significant new revenue, and believe that revenue growth will come about because other parties will exploit 5G in various ways and because 6G will resolve all the latent benefit issues.

What about the other third? This group, made up largely of more junior, more technical people, believes that new applications and business models exist, but has lost faith in the idea that simple 5G availability will drive others to find and exploit them. Some (a bit less than half of that third, or a sixth of planners overall) don’t believe that anything really new and revenue-significant will evolve in the next three years. Those who do believe it will happen think that operators themselves will have to take steps.

If you dig into the “others will do it or 6G will ensure that” view, you find that those who hold it are solidly entrenched in the supply-side view of the market. This is understandable in a way; data services in general, and consumer data services in particular, exploded once the technology needed was made available. More senior people, particularly the top-level planners, came up through the organization when network delivery alone constrained both worker information empowerment and new forms of consumer entertainment. They’ve not changed their mindset.

The younger planners, who grew up after the Internet and information empowerment were in place, are starting to wonder just what pent-up stuff could possibly be lurking in the background awaiting the changes 5G would bring. Broadband has caught up with that early pent-up experience appetite. What replaces it?

Both groups got excited about the metaverse. For the old guard, you could align metaverse requirements with 5G’s differentiating features pretty easily, and the metaverse had the happy property of being the implementation responsibility of others. You want a metaverse? Great, we’ll connect it once you have it ready to connect. To the old guard, all the metaverse hype is proof that “they will come”. Now, they’re depressed because the coverage of the metaverse has already turned a bit negative.

What about the group that thinks that telcos will have to do something proactive, not just launch 5G and wait for people to leap into action to exploit it (and spend on it)? It would be lovely if I could report that this group was growing and evangelizing their position, shifting telco mindset and preparing us for some real change, but I can’t say that. The “do-somethings” are confused and divided. They’re as excited about the metaverse as the old-guard group, and they believe that telcos could help get it going, but just how to do that isn’t clear.

The largest of the divisions is made up of planners who believe that open-source projects, substituting for traditional standards, are the answer. Even the “Field of Dreams” two-thirds majority of planners agree that software is the key ingredient in new applications and business models to drive significant 5G revenues, and so there’s growing support for things like the Sylva Project, which I blogged about HERE. However, even within the “do-something” group, support for the Sylva concept and similar initiatives is mixed.

5G’s real differentiator is latency. Thus, there really isn’t much debate that whatever will drive real 5G revenue gains will have to be associated with edge computing. The question is whether edge computing is an application, or another layer on the Build-It model, but still a long way from the top. Most of the do-something third of telco planners think that Sylva and other edge-linked initiatives are exercises in bottom-up design, and apart from software architects’ disdain for bottom-up thinking, the problem is that the actual benefits of the technology are still up in the clouds, where nobody can really see them or see who’s responsible for making them happen.

The metaverse is obviously a “top” element, but it’s so high up the food chain that even the young and Internet-savvy planners are having a problem deciding just what would help it to deploy. So, the majority are hoping (and waiting) for some consensus architecture to emerge. If it doesn’t….

But is this just an argument over development strategy, or is there something fundamental at stake? Even among those do-something planners who believe that Sylva and similar projects still miss the critical piece (the ultimate demand that the metaverse might epitomize), there’s a lack of a sense of the specific risk involved. “We need top-down” is a conclusion, not a justification. Why do you need it? I don’t get good responses to that question.

Why do we need top-down? Here’s my personal list of reasons.

First, I think that an edge-computing shift only changes what we’re Building while waiting for Them to Come. Edge computing is a hosting strategy, not an application. Is dependence on simply hosting an unspecified set of applications any less risky than dependence on connecting them? Don’t you have to contribute something functional?

Second, I think that if you consider the hosting requirements for edge computing without any specific application set to target, you risk dumbing down the feature set. Yes, edge hosting means having the ability to host at the edge, but just saying “container model” or “function/lambda” isn’t enough. Most of today’s applications, in or out of the cloud, utilize some “middleware” tools that facilitate consistency in design and reduce development effort. What tools will our hypothetical “Theys”, like metaverse applications, need? We don’t know, because we don’t know what the applications are.

Third, edge computing is its own hype target, and in fact has likely passed the point of maximum impact. This explains why Sylva got essentially zero publicity in the trade publications. Given that, it’s not likely to be hailed as the savior of 5G’s value proposition, which means that 1) the media may turn totally negative on both edge computing and 5G, and 2) they may start the 6G hype cycle in earnest. We’re already seeing 6G stories, so that wouldn’t be difficult, and a focus on 6G decouples any 5G work from favorable market perceptions.

The final question is what the do-something group thinks is going to happen, and here things are a bit discouraging. Do they think that the number of do-somethings will grow? Yes, but only a little bit, and only because of the retirement of some of the other group and the hiring of new people. Do they think that anything can change the minds of the majority? No. Do they believe that current practices will lead to significant 5G revenues within the next three years? No. What could solve this 5G revenue problem? Well, let’s see.

The responses here are a mixture. Some say IoT, some say the metaverse, some say “facilitating services”, and in all these responses there’s a risk of my findings being influenced by my own views. I don’t survey all these kinds of people in a statistically defensible way. The great majority are people who know me and most of these read my blogs. Some, in fact, have gotten to know me because they’ve contacted me in response to something I’ve said there. Thus, I can’t present my findings in this critical area as being unbiased. But I think they’re realistic and relevant, and time will tell whether I’m right.

Consumer Broadband Technology is Winning

If we were to identify the most significant trend in networking, the thing that has the greatest impact on 2023 and beyond, what would it be? In my view, it would be the consumerization of networking. There was a time when business services were the major driver of network data services, but that time is now passing. In fact, it’s pretty much passed from the perspective of operator and vendor planning, at least for the enlightened players. What matters now is the consumer, and consumer-targeted services will now become the baseline services for businesses as well, not immediately but inevitably.

The biggest reason behind this important trend is a simple matter of numbers. There are, in the US for example, about seven and a half million business sites, of which about a million and a half are associated with businesses that have multiple sites, and about fifty thousand are associated with “central sites”. In contrast, there are about one hundred thirty million households, of which about one hundred and eighteen million have broadband. My modeling says that, today, just about one percent of broadband connections in the US are made to business sites.

Residential broadband is not only pervasive, it’s getting better. The baseline for modern broadband Internet is 50 Mbps, and many areas have gigabit service options. Compare that with 20 years ago, when a major company headquarters might have had 45 Mbps T3 service (in point of fact, there were only about eight thousand locations with that capacity in the US). The cost of a residential broadband Internet connection is a very small fraction of the cost of business broadband, too.

Finally, a key driver of residential broadband is increased “Internet tolerance” associated with the explosion in online shopping. Almost all residential Internet users will do product research online, and about 85% seem to do at least some online shopping. This means that companies are probably relying on the Internet to support sales, and that in turn means that they’ve accepted the QoS limitations of residential broadband for their sales overall. That makes them less anxious about shifting what was traditionally internal company traffic to the Internet, perhaps with added isolation and security via SD-WAN/SASE. “Less anxious” doesn’t mean an instant transformation to a consumeristic network model, but it does mean that the transformation is happening now, and will accelerate over the next three years.

This shift has major consequences, the most obvious of which is that residential broadband access technology becomes the only significant wireline infrastructure, and being a broad player in networking depends increasingly on tapping into that somehow. Every player doesn’t need to be a broad player, of course, but there’s going to be increased pressure on most to at least have a role in consumer broadband, and you can see that with Ciena.

Ciena announced on November 22 that it had acquired Benu Networks and entered into an agreement to acquire Tibit Communications, with the goal of enhancing its position in residential broadband. Ciena has other options to increase its market footprint, as I’ll talk about below, but it’s found it necessary to get into the consumer broadband space in a more serious (and closer-to-the-user) way. That’s likely because competitors who did enter the space would be at an advantage if Ciena didn’t counter the move. Access is the biggest consumer of fiber, and an optical player needs to be supporting the dominant technology.

The shift of focus to residential broadband doesn’t necessarily mean the death of business broadband, but it does likely mean that MPLS VPNs will begin to decline. However, it is possible (even likely) that operators will look for a way to use residential broadband infrastructure to deliver VPN technology, likely through a combination of separated business connectivity in the access network and SD-WAN on-ramps to replace MPLS. This facilitates a shift away from the current VPN gateway routers to appliances or even hosted instances. There is, for example, no reason why the SASE-like SD-WAN technology used in the cloud couldn’t be used as a cloud-hosted VPN on-ramp to a specialized business broadband connection. It could also be used as an on-ramp to traditional SD-WAN-over-the-Internet, of course.

That takes us back to the point about Internet QoS and “best-efforts-is-good-enough”. Remember that wave of online influence on sales? Well, we deliver material through the Internet and the cloud, and we’ve adapted both the software involved and the interaction models of our software to the limits of “best efforts”. It works. Now we’re seeing more applications that support workers rather than customers shift to the same model, because of remote work and also because the Internet/cloud approach delivers a rich GUI. This effort is also proving that the Internet can support “mission-critical” interactions. And the Internet is available; there is nothing technical or regulatory that stands in the way of businesses shifting all their traffic to the Internet. Yes, it might mean changing technologies and moving security measures around, but it’s feasible. And the Internet is cheaper, so ultimately it will win.

One could reasonably ask what the industry thinks of this. What I hear from all my contacts is interesting. Among network operators, I find that the majority of the junior-level people see things pretty much as I’ve described, and the senior-level people reluctantly agree. However, the juniors are of the view that this shift will be decisive by 2024 and the seniors think it might be decisive by 2027. Among network vendors, I see a similar divergence of viewpoint, but based perhaps a bit more on role. Strategy players and engineers who are in emerging-technology areas see things like junior operator types, and management and engineers involved in traditional product areas seem to be locked into the operator-senior viewpoint.

Ciena is interesting here, in that they are taking steps now that clearly required senior management approval. Not only that, there’s a whole other set of network evolutions driven by consumerism, one being the potential metro bonanza. If metro centers become the places where edge computing is hosted, then could the core be an optical mesh of those locations? A full mesh of the roughly-250 major metro centers in the US would require about 63 thousand fiber strands, but my model says that a two-tier structure would require less than 11,000 and a three-tier structure less than a thousand. With packet optics you could thus have all the edges only three optical hops max from any other. That would validate almost all of Ciena’s current product line, so why not bet there?
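
For reference, the full-mesh figure lines up with simple combinatorics if you assume two fiber strands (one per direction) for each point-to-point link between metro pairs, which is my assumption rather than a detail of the model:

$$\text{full-mesh links} = \binom{250}{2} = \frac{250 \times 249}{2} = 31{,}125, \qquad \text{strands} \approx 2 \times 31{,}125 = 62{,}250 \approx 63\text{ thousand}$$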

Answer: Because Ciena wants a “natural opportunity” and they’d have to drive metro to make a go of that space. While metro positioning by major network vendors is currently sub-optimal IMHO, if Ciena made a sincere effort to promote a new metro model, only two outcomes would be possible. First, their inherent product limitations (primarily optical, little data center exposure) would mean they’d fail. Second, they’d get a good story out, and their packet-product competitors (Cisco and Juniper) would then be motivated to jump in, and Ciena would be out-competed again.

So we’ve had a quiet revolution. If there is no mass market, then whatever market has the most mass gets the most attention. If there is a true mass market, then eventually it eats all the other markets in terms of opportunity, and it becomes difficult to play even in a niche without a position in the mainstream. That’s where Ciena is, and where every network vendor is. The times aren’t changing, they’ve already changed.

A Promising Tech Publication Shuts Down: Why?

Back in 2019, the publishers of Politico announced they were launching a new tech publication called “Protocol”. It came out early in 2020, and in November of 2022 it announced it was ceasing publication. Since Politico is a highly successful and respected publication in the national and international political scene, how come their tech effort failed? I had high hopes for Protocol, and I had an exchange with its first editor in February 2020 when the first issues came out. Take a look at my referenced blog, and then let’s dig in.

My view on tech coverage is that, in a market where we’re surely trying to build an ecosystem as complex as any in human history, we’re “groping the elephant”. Remember that old saw about someone trying to identify an elephant behind a curtain by reaching in and feeling around? Get the trunk and it’s a snake, get the leg and you think “tree”, and the side would lead you to believe you were feeling a cliff. You can’t define an ecosystem if you look only at parts. My view, which I conveyed in email, was that more than anything else, tech coverage lacked context, and that Protocol needed to provide it. The response I got was “I completely agree — this is one of the things we want to do really well, making sure we try to tell the whole story instead of tiny pieces of it.” Well, I don’t think they did that, and I offered examples from the early story to justify my view.

An implicit point in my assessment of Protocol is that the publication had an opportunity, which means there was an unmet need. I won’t bore you with the details of what I think the need is; all my blogs focus on that. What that leaves is the question of why the need is unmet, why tech publications (these days that means online publications) aren’t doing what I believe the market needs, so let’s look at that.

To be holistic, you have to understand the whole, and in tech that’s incredibly complicated. But you also have to understand the relationships that turn “the whole” from a collection of boxes and software to a functioning infrastructure that supports some viable mission set. I know a lot of tech journalists, and I think most of them would agree that actually understanding the specific area they cover is a major challenge. Understanding all the areas and how they relate to each other? Forget it.

So is any attempt to cover tech, to convey developments in context, doomed? I don’t think so. I think my tech journalists would also agree that if they had an outline that represented the framework into which their stories fit, one that provided that critical knowledge of elements, relationships, and context, they could do their stories in a way that would meet the needs of the market. They don’t get that, and they surely could, because editors (including those who ran Protocol) could have talked to people and assembled the view. They could still do that today, but they don’t. Why?

Back in 1989 when I first started to do surveys of enterprises in a methodical way (to populate my forecast model), there were about eleven thousand real qualified network decision-makers. The number of subscribers to the best network publications of the time was about the same number. Ten years later, the number of qualified decision-makers had increased to thirteen thousand five hundred, and the circulation of publications had increased to over fifty thousand. The reason for this was that publications shifted from being subscription-based to ad-based. You filled out a reader service card, answering questions, and from those answers, the publication decided if you were qualified to get a free copy. Sound logical?

Maybe not. Here you are, a lowly tech in some vast organization, with about as much influence on the decisions made as the person who operates the coffee shop nearest the headquarters. One question on the card is “What value of technology do you personally approve or influence?” and you get a range. Pick the truth (zero) and you’ll never see that publication unless you steal a copy from someone else. So you pick (on the average, according to my research) whatever level is about two thirds of the way up from the bottom. This strategy gets you the publication, but it also means that the total purchase influence value of subscribers exceeds global GDP, which isn’t exactly plausible. It does explain how we jumped so far in “influencers” and subscribers, though.

OK, so we printed more copies than we really needed; so what? The right people still got the news, the ads were effective. Then along came online. Now we had the same explosion in unqualified people (meaning people who weren’t actually making decisions), but we could also tell what they were interested in, which we couldn’t easily do with a printed publication.

Ah, and remember that advertisers pay for eyeballs. Now, suppose I have fifteen thousand decision-makers and fifty-thousand hangers-on. I do a long, well-contexted, article that’s rich fodder for the former group, and the latter group tunes out. I have fifteen thousand eyeball-hits. On the other hand, if I do a “man-bites-dog” sensational piece, I get all sixty-five thousand. Why? The hangers-on want digested, exciting stuff, so they’re happy. The real decision makers have nothing else to read, so we get them too.

This isn’t an easy problem to solve, and I’m not sure I’m qualified to suggest a solution. My blog gets roughly a hundred thousand fairly regular readers, but as you know I don’t accept ads, or compensation for running specific stories there. I’m free to do what I want, which is not the case for “real” online tech publications that have to pay employees, website hosting bills, and so forth. I write everything myself, from my own knowledge and experience, so there’s no outside cost for me to cover. But even with all of this, I understand how things would be for an ad-sponsored blog. You get paid by click, therefore you cater to clicks.

Protocol’s challenge was that they came from a background of news, and news is widely digestible and broadly understood. Tech is not; in fact, tech understanding is probably what a new tech publication should be trying to convey to readers. But what does tech understanding mean? Does it mean providing enough information to make a truly objective assessment of a technology and the vendor space associated with it? No advertiser wants that; they want something that preferences their own products/services. What Protocol ended up doing was a kind of news slant on tech, and while it was useful to readers, I don’t think it offered advertisers the kind of thing they wanted out of the stories, which was something that mentioned them, or at least was favorable to their buying proposition. But that approach would first replicate everything that was already out there when Protocol launched, and second miss the critical goal of actually helping the tech buyer apply technology to business problems, and so justify their purchases. That would require addressing a much smaller audience, and that defies ad sponsorship principles that focus on eyeball counting.

Protocol was launched by the people who gave us Politico, but political news touches everyone and doesn’t require special skills to digest. You can sell an ad on a political website and be assured that millions could be reasonable targets for it. Can we make the same assumption about technology sites, technology ads? No, because only those who influence big tech purchases are viable ad targets. So is there no niche for Protocol to have filled? I think there was, and I think that niche was to advance buyer literacy among those real buyers.

Let me offer some insight I dug out of my old survey data. Back in 1998, almost 90% of decision-makers said that they fully understood the technology they were buying and how to apply it to their problem set. Ten years later, only 64% said that, and today only 39% say that. It’s hard for me to believe that, if we had the same level of tech literacy in 2022 that we had in 1998, we wouldn’t be way further along in tech revolution than we are. We’d be selling more tech products and services, company stocks would be higher, and tech employees and investors would have more money. Seems good to me.

For vendors, this frames a dilemma that I mentioned last week in my blog on Cisco and Juniper, the issue of sales versus marketing. What’s the difference between being sales-driven and being marketing-driven? Salespeople are commissioned to sell, not to educate. They don’t want to spend a lot of time in a sales call, they want to get the order and move on to the next opportunity. Say “consultative sale” or “buyer education” and they blanch. But if a new technology comes along, how do the decision-makers get the literacy they need to pick a product and get the deal approved internally? The best answer would be “marketing”.

Marketing is a mass activity, not a personal one. You create marketing collateral and get it to a decision-maker, and you can educate them, indoctrinate them, and support them in their mission in a way you’d have a lot of trouble getting your sales force to do. Marketing is the great under-utilized resource in tech, and it’s the thing that can really drive market change. For vendors who aren’t major market players, marketing is what can make you into one, and every such vendor needs to accept and exploit that truth. Why? So they don’t go the way of Protocol.

Could an Up-and-Coming Vendor Gain Traction in Networking?

In my blog on Monday, I talked about the battle of the two giant IP network equipment vendors, Cisco and Juniper. The two, I said, are battling it out in a sales-driven arena, and neither is pushing all the buttons it could on the marketing side. That raises a question: could a newcomer step up and use marketing techniques the others haven’t fully exploited to gain a lot of traction? Is the “next Cisco” really a possibility?

Obviously, newcomers have succeeded in network equipment in the past. Cisco and Juniper both had to claw their ways into a strong position in the space, battling incumbent vendors. In Juniper’s case, it was Cisco. In Cisco’s case, you might be surprised to learn the incumbent was IBM. In both cases, the wannabe vendor used a specific and easily understood criticism of the incumbent, then leveraged it.

Cisco’s rise came about because IBM’s network technology was simply priced too high. Interestingly, a part of that was due to the fact that IBM’s technology (System Network Architecture or SNA, if you’re interested) was inherently highly secure and reliable, which the market of the time needed but the emerging Internet didn’t. IP routing was a heck of a lot cheaper, and that generated a business shift to IP. That, and the growth of the Internet, then propelled Cisco forward.

IBM SNA was proprietary technology, at least in that it wasn’t based on formal standards. Cisco’s IP stuff was initially based in part on the same sort of thing, and Juniper entered the market by capitalizing on the fact that there was an increased demand for standards-based networking. Juniper was particularly effective in promoting its technology to the service providers, and from there they expanded their reach.

So can there be a newcomer, a “unicorn” vendor that could threaten to at least steal some market share from these giants? A decade or two ago, there was strong interest in the VC space to come up with “the next Cisco”, largely centered in startups in the Boston area, but it didn’t generate anything notable. More recently, competition to our two network giants has come from something more diffuse than unicorn-like, the “white box” or “open-model” approach.

You could say that this is a further step along the “standards” path, but unlike the old standards differentiator, open-model networking is attractive even to some enterprises. There’s been router software available from a variety of sources, some open and some licensed, for well over a decade. When custom chips capable of pushing a lot of packets came along (Broadcom is the leader in this space), and software-defined networking (SDN) took shape, the result was a push for open hardware that could be married to the router software.

The question is whether “open-model” or “white-box” presents a compelling value proposition of the sort that launched Cisco and Juniper. You can argue that history says “No!” because white-box router technology, or white-box technology in general, hasn’t taken off in the enterprise space and has met with limited success in the service provider market (more on that below). Why? Two reasons, one practical and one subjective.

The practical reason is the buyers’ concerns about integration, which operate at two levels. First, a white box is a space heater without software, and if the new router model is really open, the software could come from any source, which means integration is important. As it happens, nobody wants to do that integration or pay someone else to do it. Second, networking today is more mature than it was when Cisco and Juniper launched, and a mature market has a large base of devices not fully depreciated. A new source means integrating with a mass of stuff already in use, and that’s a headache too.

Maybe a different tagline is needed here, and there are a couple of ideas floating out there. One is from startup/unicorn DriveNets, who offers the “Network Cloud” and the other from Juniper, with “Cloud Metro”. It doesn’t take a PR genius to notice that “cloud” is a common theme, and it’s a smart one because a great majority of both enterprise and service provider network planners say that the cloud is impacting networks and network infrastructure.

DriveNets is a disaggregated or cluster-router model, where a collection of devices connected in a mesh becomes in effect one high-capacity device or even a series of virtual devices hosted on a single cluster. DriveNets is the most successful of the Cisco/Juniper IP infrastructure competitors, but the company has focused on the IP core network, in no small part because AT&T played a big role in getting the company launched. The problems with this are 1) that there aren’t as many core devices as devices at other network levels, and 2) the core is arguably moving toward agile optics. Still, for the service provider space, DriveNets is a real contender.

The question of how many routers a newcomer could sell raises the other tagline, “Cloud Metro” and the topic of metro networks overall. Metro is (as I’ve noted in past blogs) a kind of sweet spot for service provider networking. There are a lot of potential metro concentration points, as many as a hundred thousand worldwide. Each of these serves enough customers to make it a viable point for service feature hosting, edge computing, and other interesting stuff. Juniper grabbed the notion first almost two years ago, but hasn’t developed it as much as they could have. DriveNets’ architecture would also be a great fit for metro, but they’ve not really exploited that capability either. Could another startup or even a smaller vendor take advantage of that lack of metro focus? Perhaps.

The problem is that “metro” is a service provider infrastructure element, and there’s increased market interest in enterprise-compatible products. The service provider sales cycle is between 10 and 19 months at the moment (up from 9 to 14 months, where it was largely stuck for most of the last two decades), whereas the average enterprise network deal is done in between 4 and 9 months. Enterprises, though, are focusing network infrastructure-building on the data center (switching rather than routing), security, and VPN edge technology. Security is the easiest of the three to sell.

So who’s the most important competitor to Cisco and Juniper? Given the need for a strong enterprise focus, probably the almost-finalized Broadcom/VMware combination. Broadcom has the chips. VMware has a nice inventory of virtual-network technology that plays into the way enterprise networking is moving. They also have a significant foothold in enterprise data centers, which is critical. Their biggest handicap here is a combination of positioning and organization, and the two are likely related.

You’re probably not surprised that I’d criticize their positioning; whose don’t I criticize, after all? Here, though, the challenge is that there are some significant mindset changes needed to promote VMware’s position, and those can only come about through some really aggressive marketing. VMware doesn’t have that history, even though they have a big positioning advantage in multi-cloud and cloud portability that they could leverage.

VMware’s enterprise networking position is definitely cloud-centric, which is a good thing. They have a strong virtual-network story for the data center (NSX) and in SD-WAN, and they’re starting to integrate the two. Their security portfolio is good, but they lack the security focus of vendors like Cisco and Juniper, and even their virtual-network cloud stuff gets a bit blurred in positioning relative to vSphere.

VMware isn’t going to steal routers’ thunder, but for the enterprise a router is increasingly just an edge device, and if MPLS VPNs do fall out of favor because of cloud networks and SD-WAN, the majority of those routers would disappear. That means that the enterprise network might increasingly be moving to appliances and hosted instances, which is VMware’s strength. It’s hard to say how quickly this all could happen, but if I’m correct in my views, I think we’ll see clear signs by the end of 2023. Meanwhile, Happy Thanksgiving!

Cisco Versus Juniper: How’s that Shaking Out?

Cisco and Juniper are both key players in the network equipment space, for slightly different reasons. Both had good quarters and were rewarded by Wall Street, but there have always been major differences between the style of the two companies. Whether those differences are widening or narrowing is important both to the competitors themselves and to the market at large, so today we’re going to look at those differences and what they might mean.

First, let’s look at the numbers. Cisco defines six product areas, and they were up in three of them. Juniper defines four product areas and they were up in all four. Both companies benefited from a reduction in order backlogs created by easing supply-chain issues. Juniper, based on my input from the Street, was generally rated lower than Cisco and generated a bit more of an upside than expected, but I think their objective financial performance was better. The difference in Street viewpoint and that potentially improved upside on Juniper’s part are the things we have to look at now.

Cisco, as I’ve noted before, is a sales machine. Their approach has been pretty consistent over the last couple of decades, in my view. They focus on making the deal, on the current quarter and making sure not to undermine it, and on making sure they do undermine competitor initiatives aimed at rocking the boat. The company doesn’t innovate as much as execute, and its ability to consistently turn in good numbers has made it attractive to the Street.

Juniper is in some senses the same, in that they’ve tended to respond to their market-leading competitor by taking a sales focus. That leads to their being characterized by many on the Street as “playing Cisco’s game”, and given Cisco’s strength in sales, that’s a sub-optimal approach. That likely accounts for Juniper’s lower Street-cred, so to speak. On the other hand, Juniper has made some incredibly smart product-strategy moves, especially in M&A. Of the two, I believe they have the better product portfolio, and by a decent margin.

Who wins in 2023? To decide that we need a formula defining what a win would require, so I’m going to propose a model. In my view, you start with a broad vision of market evolution that frames your value proposition. You add a network model that fits that vision, and close with a product set that fills out the model, a marketing position that evangelizes the vision and product set, and a sales strategy that frames current buyer needs in terms of the market vision and thus ties back to the first two elements. Do our vendors have that? Let’s start with my own view of what’s going on in the markets.

For most of modern times, commerce has been driven by processes initiated and controlled by the seller. You read an ad in a magazine, you went to a store or you ticked a reader service card to get information, but the real process got started when you encountered product information the seller provided explicitly, and the sales process was controlled by a retail outlet and/or a salesperson. A company’s IT process and network had to run the business, but that meant largely supporting non-real-time steps.

Today, if you want information on a product or vendor, you go online. If you want pricing you go online. If you want to buy something, you’re increasingly likely to go online. Commerce now takes place in real time, driven by the buyer’s attention and drawing on information resources the seller presents not explicitly to you, but to the market at large.

This shift has enormous significance, because human participation in the normal flow of online commerce doesn’t exist, and can’t exist if the process is to work efficiently. Information technology is the instrument of commerce now. It’s not just supporting the business, it’s the instrument through which marketing and sales are realized. This is “mission-critical” at a whole new level, a level where “the normal flow” is what generates revenue, and where sustaining that flow in real time, for everyone, is the fundamental mission of the company.

Historically, nothing in IT has worked that way, including networking. Historically, something breaks and humans have to cooperate to get it fixed, sometimes replacing things, rerouting, rewiring. Historically, we could not assume that a remote transaction was authoritative until the paper copy validation caught up. Historically, we could protect goods, records, and bank accounts with armed guards. None of those historical assumptions hold up in today’s world. We need a new basis for the fundamental promises that make up successful commerce, successful economies, and a big part of that new basis has to come from the network.

OK, this is my view of a good vision statement. What do our competitors offer? Neither has this kind of high-level view, but what views do they have? It’s somewhat difficult to say, because both companies tend to take sales messages into their marketing/positioning channels rather than articulating strategic messages. That also applies to their websites, which should reflect the issues they believe are driving network technology planning.

Cisco, IMHO, presents no vision of the evolution of the market at all. Their homepage dives into security immediately (one of the areas where they indicated they would prioritize for shareholder value reasons). Digging deeper in their site, you still find no statements of market evolution or buyer need. This is totally consistent with Cisco’s sales-centric approach, but it leaves an opportunity door wide open.

Juniper doesn’t do much better on their website, but they have articulated something that at least can be called a high-level vision, “Experience-Based Networking”. Not only is that a tagline that could be linked to the market vision I opened with, it’s also one that supports the evolution from the old model to the new. All of that is good news, but Juniper doesn’t make the positioning connection strongly (their tagline isn’t immediately visible on their website, for example) so I guess it would have to be considered “potential good news.”

Let’s move down a level now, and construct a network model to support the vision. For reasons that will become clear, I’m going to conflate this with the product-map-to-model step (one more layer down).

You need quite a few things in a network model, including enhanced management to reduce outages and impacts, tight integration between cloud and network, tight integration between data center and network, and a high level of network portability across multiple infrastructures/operators. All of these things are aimed at ensuring that the online experience is highly reliable, presents a consistently high QoE, can be efficiently linked to applications, and can be delivered over the Internet, a VPN, or a private network.

Cisco defines no particular model or vision to achieve these goals, hardly a surprise since they don’t define high-level goals at all. However, Cisco’s product line does have the elements that could address these specific model elements. What both enterprises and service providers tell me is that Cisco tends to focus on product sales rather than on a network model, which again isn’t a surprise. They seem to believe that if a given product is needed, the prospects will decide their model and ask for products to fill it out. This, again, is reasonable on the surface, but it risks strategic intervention by competitors.

Juniper does have a network model, and in my view it does a better job of aligning with the reference network model I described above, the one I think better reflects real market trends. Juniper also biases its website and positioning more to the model level. For example, their most visible positioning strategy is to promote AI and the cloud integration of network and other telemetry to enhance visibility and management. Mist AI is a strong product, and the notion that AI could enhance operational responses to network issues is congruent with the tagline (Experience-Based Networking) and with my presumptive mission and network models.

Enterprises and service providers who have commented to me about the two companies place Juniper at the top for “innovation” and Cisco at the top on “execution”. They confirm that Cisco is more likely to have account control and influence, given their market-leader status, but that Juniper likely has technologies that better fit conditions and how they’re likely to evolve.

One new factor in the competitive mix is Cisco’s restructuring announcement. While it will include layoffs, the major point that the company raised was a realignment of effort toward profitable segments, to enhance shareholder value. This has been interpreted by some financial news services as a shift more to an enterprise focus, and by some Street analysts as a move to sustain and improve share prices. It’s also possible that Cisco is reacting to Juniper’s success, consistent with its normal goal of being not a leader in tech but a “fast follower” who will exploit (and step on) the success of others.

This whole swirling mix of points suggests to me that 2023 will evolve in two stages. The first, which I think will last into the mid-spring timeframe, will be a gradual “evolution” of the two companies’ current positioning and strategies. The timing of this roughly aligns with what I expect will happen in the global economy, as the inflation and rate-hike shocks dissipate. I don’t expect a major change, particularly with Cisco, but just what they mean in their restructuring story will become clear.

The second stage is what I think we could call the “awareness” stage, where network buyers will respond to the developing conditions in the market, and both Cisco and Juniper will have to respond to changes in attitude. I believe that the evolutionary model for the role of the network that I opened with here is now emerging to the point where at least some planners in both our competitors now see the conditions. Of the two, as I’ve said here, Juniper seems best-positioned to address the future, and they are even now a bit more willing to be strategically innovative than Cisco. That means that they could, in the “awareness” stage next year, jump out and change the dynamic of their space, at least a bit.

What’s behind all my “could” qualifiers here is the fact that “awareness” happens at the pace of marketing. Aggressive positioning leads to more media engagement, which leads to more website visits that can set agenda points for network planning. That leads to sales calls that convey a solution to a problem the vendor itself has defined, and so are very likely to fit. And that leads to a change in dynamic. All this stuff starts with that aggression, and neither of the companies has shown aggression in the last five years or so. The “awareness” phase of 2023, and the advantage in 2024 and beyond, will lie with the vendor that comes out of its shell first, fastest, and best.
