Single Pass Cloud Engine (SPACE): The Key to Unlocking the True Value of SASE

When Gartner introduced Secure Access Service Edge (SASE) in 2019, it caught the market by surprise. Unlike many advancements in technology, SASE wasn’t a new networking capability or an answer to an unsolved security mystery. Rather, it addressed a mundane yet business-critical question: how can IT support the business with the expected security, performance, and agility in an era marked by growing technical and operational complexity?

Gartner has answered that question by describing a SASE architecture as the convergence of multiple WAN edge and network security capabilities, delivered via a global cloud service that enforces a common policy on all enterprise edges: users, locations, and applications.

This new architecture represented a major challenge for the incumbent vendors who dominated IT networking and security with a myriad of disjointed point solutions. It was their architectures and designs that were largely responsible for the pervasive complexity customers had to deal with over the past 20 years. Why was the SASE architecture such a challenge for them? Because following Gartner’s framework required a massive re-architecture of legacy products that were never built to support a converged, global cloud service.

This is exactly where Cato Networks created a new hope for customers by introducing the Cato Single Pass Cloud Engine (SPACE). Cato SPACE is the core element of the Cato SASE architecture and was built from the ground up to power a global, scalable, and resilient SASE cloud service. Thousands of Cato SPACEs enable the Cato SASE Cloud to deliver the full set of networking and security capabilities to any user or application, anywhere in the world, at cloud scale, and as a service that is self-healing and self-maintaining.


Why Convergence and Cloud-Native Software are Key to True SASE Architecture

SASE was created as a cure for the complexity problem. Approaches that maintain separate point solutions remain marked by separate consoles, policies, configurations, sizing procedures, and more. In short, they drive complexity into the IT lifecycle. Furthermore, such approaches introduce multiple points of failure and add latency, as packets are decrypted, inspected, and re-encrypted within every point solution.

Convergence was the first step in reducing complexity, replacing the many capabilities of multiple point solutions with a single software stack. The single software stack is easier to maintain, enables more efficient processing, streamlines management through a single pane of glass, and more. Convergence, though, has strategic benefits, not just operational ones.

A converged stack can share context and enforce very rich policies to make more intelligent decisions about optimizing and securing traffic. This isn’t the case with point solutions, which often have limited visibility depending on how they process traffic (e.g., as a proxy) and what information was deemed necessary for the specific function they provide. For example, a quality-of-service engine may not be able to take identity information into account, and IPS rules will not consider the risk associated with accessing a particular cloud application.
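To make the difference concrete, the sketch below shows how a converged engine could weigh identity, device posture, application risk, data sensitivity, and quality of service in a single rule evaluated over shared context. It is a hypothetical illustration only: the names (FlowContext, Verdict, evaluate) and the rule logic are invented for this post and are not Cato’s actual data model or policy engine.

```python
# Hypothetical illustration only: one rule evaluated in a single pass over shared
# flow context. FlowContext, Verdict, and evaluate are invented names for this
# sketch; they are not Cato's actual data model or policy API.
from dataclasses import dataclass

@dataclass
class FlowContext:
    user_group: str          # identity attribute
    device_compliant: bool   # device posture attribute
    application: str         # application attribute
    app_risk_score: int      # cloud-application risk, 1 (low) to 10 (high)
    data_label: str          # data attribute, e.g. "confidential"

@dataclass
class Verdict:
    allow: bool
    qos_class: str
    reason: str

def evaluate(ctx: FlowContext) -> Verdict:
    """Identity, risk, data, and QoS are all weighed in the same decision."""
    if not ctx.device_compliant:
        return Verdict(False, "none", "non-compliant device")
    if ctx.app_risk_score >= 8 and ctx.data_label == "confidential":
        return Verdict(False, "none", "risky cloud app handling confidential data")
    if ctx.user_group == "finance" and ctx.application == "erp":
        return Verdict(True, "high-priority", "business-critical flow")
    return Verdict(True, "best-effort", "default policy")

print(evaluate(FlowContext("finance", True, "erp", 2, "internal")))
```

Because every attribute is visible to the same engine, a single rule can combine considerations that isolated point solutions would each see only a slice of.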

Cloud-native design builds on the value of convergence by enabling the scaling and distribution of the converged software stack. The converged stack is componentized and orchestrated to serve a very large number of enterprises and the traffic flowing from their users, locations, and applications to any destination on the WAN or Internet. The orchestration layer is also responsible for the globalization, scalability, and resiliency of the service by dynamically associating traffic with available processing capacity. This isn’t a mere retrofit of legacy product-based architectures, but rather the creation of a totally new service-based architecture.
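The sketch below illustrates that orchestration idea at a very high level: pick the closest point of presence and, within it, the processing engine with the most spare capacity. It is a simplified, hypothetical model written for this post; the names (Pop, Engine, assign_tunnel) are invented, and this is not Cato’s orchestration code.

```python
# Hypothetical sketch of the orchestration idea described above: associate an
# incoming edge tunnel with available processing capacity. Pop, Engine, and
# assign_tunnel are invented names for this illustration; this is not Cato's code.
from dataclasses import dataclass, field

@dataclass
class Engine:
    engine_id: str
    capacity_mbps: int
    load_mbps: int = 0

    def headroom(self) -> int:
        return self.capacity_mbps - self.load_mbps

@dataclass
class Pop:
    name: str
    latency_ms: float                       # latency from the connecting edge
    engines: list = field(default_factory=list)

def assign_tunnel(pops, expected_mbps):
    """Pick the closest PoP, then the least-busy engine in it with enough headroom."""
    for pop in sorted(pops, key=lambda p: p.latency_ms):
        candidates = [e for e in pop.engines if e.headroom() >= expected_mbps]
        if candidates:
            engine = max(candidates, key=lambda e: e.headroom())
            engine.load_mbps += expected_mbps
            return pop.name, engine.engine_id
    raise RuntimeError("no spare capacity anywhere in the cloud")

# Example: two PoPs; the closer one is picked as long as it has headroom.
pops = [
    Pop("frankfurt", 8.0, [Engine("fra-1", 2000, 1500), Engine("fra-2", 2000, 300)]),
    Pop("amsterdam", 14.0, [Engine("ams-1", 2000, 100)]),
]
print(assign_tunnel(pops, expected_mbps=400))   # -> ('frankfurt', 'fra-2')
```

The key design point this toy model captures is that traffic is bound to the service, not to a specific box: any unit of capacity can serve any flow, so scale and placement become the provider’s problem rather than the customer’s.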

Cato SPACE: The Secret Sauce Underpinning the Cato SASE Architecture

The Cato SASE Cloud is the global cloud service that serves Cato’s customers. Each enterprise organization is represented inside the Cato SASE Cloud as a virtual network that is dynamically assigned traffic processing capacity to optimize and secure the customer’s traffic from any edge to any destination.

The Cato SASE Cloud is built on a global network of Cato SASE Points of Presence (PoPs). Each PoP has multiple compute nodes, each with multiple processing cores. Each core runs a copy of the Cato Single Pass Cloud Engine (Cato SPACE), the converged software stack that optimizes and secures all traffic according to customer policy.

These are the main attributes of the Cato SPACE:

  • Converged software stack, single-pass processing: The Cato SPACE handles all routing, optimization, acceleration, decryption, and deep packet inspection processing and decisions. Putting this in “traditional” product category terms, a Cato SPACE includes the capabilities of global route optimization, WAN and cloud access acceleration, and security as a service with next-generation firewall, secure web gateway, next-gen anti-malware, and IPS. Cato is continuously extending the software stack with additional capabilities, but always following the same SASE architectural framework.
  • Any customer, edge, flow: The Cato SPACE is not bound to any specific customer network or edge. Through a process of dynamic flow orchestration, a particular edge tunnel is assigned to the least busy Cato SPACE within the Cato SASE PoP closest to the customer edge. The Cato SPACE can therefore handle any number of tunnels from any number of customers and edges. This creates an inherently load balanced and agile environment with major advantages as we discuss below.
  • Just-in-time contextual policy enforcement: Once assigned to a Cato SPACE, the flow’s context is extracted, the relevant policy is dynamically pulled and associated with the flow, and traffic processing is performed according to this context and policy. The context itself is extremely broad and includes network, device, identity, application, and data attributes. The context is mapped into policies that can consider any attribute within any policy rule and are enforced by the Cato SPACE.
  • Cloud-scale: Each Cato SPACE can handle up to 2 Gbps of encrypted traffic from one or more edge tunnels with all security engines activated. Edge tunnels are seamlessly distributed within the Cato SASE Cloud and across Cato SPACEs to adapt to changes in the overall load. Capacity can be expanded by adding compute nodes to the PoPs, since the Cato SPACEs are totally symmetrical and can be orchestrated into the service at any time.
  • Self-healing: Since Cato SPACEs are identical and operate just-in-time, any Cato SPACE can take over any tunnel served by any other Cato SPACE. The orchestration layer moves tunnels across Cato SPACEs in case of failures. If a Cato PoP becomes unreachable, edge tunnels can migrate to a Cato SPACE in a different Cato SASE PoP, either within the same region or across regions according to customer policy. Customers no longer have to design failover scenarios for their regional hubs; the Cato SASE Cloud inherently provides that resiliency automatically (see the sketch after this list).
  • Self-maintaining: Cato DevOps, Engineering, and Security teams are responsible for maintaining all aspects of the Cato SASE Cloud. Software enhancements and fixes are applied in the background across all Cato PoPs and Cato SPACEs. New IPS rules are developed, tested, and deployed by the Cato SOC to address emerging threats. Cato DevOps and NOC teams perform 24×7 monitoring of the service to ensure peak performance. Customers can, therefore, focus on policy configuration and analytics using Cato’s management console, which provides a single pane of glass for the entire service.
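As referenced in the self-healing attribute above, the sketch below shows how tunnel re-homing could work in principle when an engine (or its whole PoP) is lost. It reuses the hypothetical Pop/Engine/assign_tunnel model from the earlier sketch and is illustrative only, not Cato’s implementation.

```python
# Hypothetical continuation of the earlier orchestration sketch: when an engine
# fails (or its whole PoP becomes unreachable), re-home its tunnels onto the
# surviving capacity. Illustrative only; not Cato's implementation.

def handle_engine_failure(pops, failed_engine_id, tunnels):
    """Re-home every tunnel served by the failed engine onto surviving engines."""
    # Drop the failed engine from the pool of assignable capacity.
    for pop in pops:
        pop.engines = [e for e in pop.engines if e.engine_id != failed_engine_id]

    moved = []
    for tunnel in tunnels:
        if tunnel["engine_id"] != failed_engine_id:
            continue
        # assign_tunnel (from the earlier sketch) prefers the closest PoP that
        # still has spare capacity, so tunnels stay in-PoP or in-region when possible.
        pop_name, engine_id = assign_tunnel(pops, tunnel["mbps"])
        tunnel.update(pop=pop_name, engine_id=engine_id)
        moved.append(tunnel["tunnel_id"])
    return moved
```

The point of the sketch is that failover is a property of the service, exercised continuously by the orchestration layer, rather than a per-customer design exercise.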

Cato SPACEs vs. Cloud Appliances: You Can’t Build a Cloud Service from a Pile of Boxes

Cato SPACEs are built for the cloud and cloud-hosted appliances are not. This means the use of appliances eliminates many of the agility, scalability, and resiliency advantages provided by a SASE service based on a cloud-native service architecture.


Here is how cloud-hosted appliances compare with Cato SPACEs, capability by capability:

Single-pass processing
  • “Cloud” appliance: Partial. This depends on the appliance’s software build and on how many other capabilities need to be service-chained for a full solution.
  • Cato SPACE: Yes. All capabilities are always delivered within the Cato SPACE architectural framework.

Any customer, edge, flow
  • “Cloud” appliance: No. Each customer is allocated one or more appliances in one or more cloud provider operating regions.
  • Cato SPACE: Yes. Any customer, edge, or flow can be served by any of the thousands of Cato SPACEs throughout the Cato SASE Cloud.

Load balancing
  • “Cloud” appliance: No. The regional edges are hard-bound to specific appliances. With limited or no load balancing, capacity must be sized to handle peak loads.
  • Cato SPACE: Yes. The cloud service orchestration layer load-balances customers’ edges across Cato SPACEs.

Cloud-scale
  • “Cloud” appliance: No. Appliances do not create a cloud-scale architecture. The operating model assumes traffic variability is low, so manual resizing is needed to expand processing capacity. The current limit of appliance-based SASE services is 500 Mbps.
  • Cato SPACE: Yes. Cato SPACEs are dynamically assigned edge tunnels to accommodate increases in load, with no service reconfiguration. Cato handles the capacity planning of deploying Cato SPACEs to ensure excess capacity is available throughout the cloud. The current limit of a Cato SPACE is 2 Gbps across one or more edge tunnels.

Resiliency
  • “Cloud” appliance: Partial. Resiliency must be designed for specific customers based on expected appliance failover scenarios (an HA pair inside a PoP, a standby appliance in alternate PoPs). The design must be tested to ensure it works.
  • Cato SPACE: Yes. Cato automatically handles failover inside the service by migrating edge tunnels between Cato SPACEs within the same PoP or across PoPs. This is an automated capability that requires no human intervention or pre-planning. Cato has implemented many lessons learned over the years on the best way to approach resiliency without disrupting ongoing application sessions.

Globalization
  • “Cloud” appliance: Limited. Most SASE providers rely on hyperscale cloud providers. Gartner warned that such designs limit the reach of these SASE services to the hyperscalers’ compute PoP footprint and roadmap.
  • Cato SPACE: Unlimited and growing. Cato deploys its own PoPs everywhere customers need the service to support their business. We control the choice of location, datacenter, and carriers to optimize global and local routing. We also control IP geolocation and the degree of sharing.

SASE Architecture Matters. Choose Wisely.

SASE was called a transformational technology by Gartner for a reason. It changes the way IT delivers the entire networking and security capability to the business, and the stakes are high. SASE functional capabilities will continue to grow over time with all vendors. But without the right underlying architecture, enterprises will fail to realize the transformational power of SASE.

Cato is the pioneer of the SASE category. We created the ONLY architecture purposely built to deliver the value that SASE aims to create. Whether it is M&A, global expansion, new work models, emerging threats or business opportunities, with Cato’s true SASE architecture you are ready for whatever comes next.
