Terraforming Alkira and Fortinet is Multicloud Bliss


William Collins
Building at the intersection of cloud, automation, and AI. Host of The Cloud Gambit podcast.

There is a reason why enterprises prefer a best-of-breed approach to connecting and securing their network and intellectual property. Alkira announced its integration with Fortinet at AWS re:Inforce in July, and it is a perfect example of best-of-breed in action. As anyone who reads my blog knows, I take an automation-first approach to everything. Alkira’s Terraform provider is Fortinet-ready, so let’s take it for a spin!

Intro

Key Features

This partnership comes packed with great features, including the seamless integration of FortiManager (which orchestrates the Fortinet Security Fabric), extending existing firewall zones into and across clouds with auto-mapping of zones-to-groups, and weathering traffic surges with auto-scaling.

Alkira and Firewall State

Coming from the data center world, I know that configuring firewalls for high availability has its challenges. Since firewalls are stateful, if traffic ingresses through firewall-a and egresses through firewall-b, state breaks because firewall-b has no session for the flow. With Alkira, FortiGate instances run active/active, and traffic symmetry is handled natively. Consistent hashing in the forwarding layer detects failures and transitions traffic to available instances.

The Plan

For this exercise, I decided to include a mix of Hybrid Multi-Cloud and Multi-Region, and why not cross continents too? I already have an SD-WAN fabric extended into the East US and Central EU regions, along with an IPsec tunnel. For cloud, I have several networks connected from AWS and Azure with Multi-Cloud Internet Exit. This gives us plenty of options for selectively forwarding east/west, north/south, and internet-bound traffic to the FortiGates.

Existing Topology

Topology

Scoping Criteria

First, let’s scope out the basics of what we want to deploy:

  • FortiGates running FortiOS 7.0.3 deployed in all regions
  • Register with on-premises FortiManager
  • Minimum: 2 / Maximum: 4 auto-scaling configuration
  • Extend existing on-premises FortiGate zones across all clouds

Policy Criteria

Second, let’s define what traffic we want to steer through the FortiGates and what should be left alone.

  • Deny any-to-any by default
  • Non-Prod can talk to Non-Prod directly but must pass through a FortiGate when talking to Prod
  • Non-Prod can egress to the internet directly, but Prod must first pass through a FortiGate
  • Partner can talk to migration cloud networks only but must first pass through a FortiGate
  • Corporate can talk to all cloud networks but must first pass through a FortiGate

The partner requirement above is pretty standard. As organizations look to modernize using the public cloud, often, they will work with a preferred partner to give them a strong start and avoid mistakes in the beginning that would otherwise set them back. These partners generally have access to the environments in scope for modernization only.

Why not forward everything to Firewalls?

It may be tempting to forward all traffic through NGFWs; meeting compliance requirements, especially when protected data is involved, can be challenging. However, doing so rarely makes sense when you examine cloud principles and cost. Where possible, identify the traffic that must transit firewalls for compliance or other meaningful reasons. If you have sandbox cloud networks with no access to data or other intellectual property, does it make sense to forward all of that traffic through a firewall?

Let’s Build!

We could create separate resource blocks for each FortiGate we want to deploy, but that would be unsightly. For this example, let’s use the `count` meta-argument to create multiple services and `dynamic` blocks to handle the nested schema for instances.
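As a quick, self-contained illustration of what `count` does before we use it for real, here is a minimal sketch using the hashicorp/null provider (the resource and local names are just for illustration):

```hcl
// Minimal sketch of the count meta-argument: one resource
// instance per element of local.firewalls, addressable as
// null_resource.example[0] and null_resource.example[1]
locals {
  firewalls = ["us-forti", "eu-forti"]
}

resource "null_resource" "example" {
  count = length(local.firewalls)

  triggers = {
    name = local.firewalls[count.index]
  }
}
```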

Some Locals

We need to deploy the service twice since we are working across two regions. Let’s define names for the service, the regions we are deploying to, instance configurations for when auto-scaling occurs, and some values for policies. I have a separate variables.tf file with a variable of type list(map(string)) that holds the instance names and serials, so I’m just looping through those here.
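For reference, that variables.tf entry might look something like this (the names and serials below are placeholders, not real values):

```hcl
// Hypothetical variables.tf sketch: instance names and serial
// numbers that the locals below loop through
variable "instances" {
  description = "FortiGate instance names and serial numbers"
  type        = list(map(string))
  default = [
    { name = "forti-1", serial = "SERIAL-1" },
    { name = "forti-2", serial = "SERIAL-2" }
  ]
}
```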

```hcl
locals {

  // Need some names
  firewalls = ["us-forti", "eu-forti"]

  // Need some regions
  regions = ["US-EAST-2", "GERMANYWESTCENTRAL-AZURE-1"]

  // Need some instances (for when auto-scaling occurs)
  instances = {
    for instance in var.instances :
    "${instance.name}/${instance.serial}" => instance
  }

  // Define groups for policy
  from_groups = ["corp"]
  to_groups   = ["nonprod", "prod", "migration"]

  // Filter into separate sets
  from_grp_ids = [
    for v in data.alkira_group_connector.from_groups : v.id
  ]

  to_grp_ids = [
    for v in data.alkira_group_connector.to_groups : v.id
  ]

}
```

Fortinet Configuration

Since we already have segments and groups provisioned, we can reference them in our main.tf file. This configuration will provision a Fortinet service per region, connect back to our FortiManager instance on-premises, map our existing firewall zones to Alkira groups for multi-cloud segmentation, and auto-scale to handle load elastically.

```hcl
// We already have a segment, let's use it
data "alkira_segment" "business" {
  name = "business"
}

resource "alkira_service_fortinet" "service" {
  count = length(local.firewalls)

  // Provision for each region with FortiOS 7.0.3
  version = "7.0.3"
  name    = local.firewalls[count.index]
  cxp     = local.regions[count.index]

  // Configure auto-scaling + size
  min_instance_count = 2
  max_instance_count = 4
  auto_scale         = "ON"
  size               = "LARGE"

  // Licensing + credentials
  license_type         = "PAY_AS_YOU_GO"
  management_server_ip = var.mgmt_server
  credential_id        = alkira_credential_fortinet.auth.id

  // Segment + tunnel protocol
  management_server_segment = data.alkira_segment.business.name
  segment_ids               = [data.alkira_segment.business.id]
  tunnel_protocol           = "IPSEC"

  // Handle nested schema for instances
  dynamic "instances" {
    for_each = {
      for instance in local.instances : instance.name => instance
    }

    content {
      name          = instances.value.name
      serial_number = instances.value.serial
      credential_id = alkira_credential_fortinet_instance.auth.id
    }
  }

  // Handle nested schema for segment_options
  dynamic "segment_options" {
    for_each = var.segment_options

    content {
      zone_name  = segment_options.value.zone_name
      segment_id = data.alkira_segment.business.id
      groups     = segment_options.value.groups
    }
  }
}
```
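For completeness, here is one possible shape for var.segment_options that would satisfy the segment_options dynamic block; the zone and group names below are illustrative, not from a real environment:

```hcl
// Hypothetical variables.tf sketch: map existing FortiGate
// zones to Alkira groups for multi-cloud segmentation
variable "segment_options" {
  description = "Existing FortiGate zones mapped to Alkira groups"
  type = list(object({
    zone_name = string
    groups    = list(string)
  }))
  default = [
    {
      zone_name = "cloud-prod"
      groups    = ["prod"]
    },
    {
      zone_name = "cloud-nonprod"
      groups    = ["nonprod", "migration"]
    }
  ]
}
```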

Policy Configuration

Now we can selectively steer traffic to the FortiGates. Policy resources take a list of IDs for source and destination groups, which is why I defined to_groups and from_groups in local variables. We can simply loop through each set, return the IDs, and provide them as a single value to the policy resource.

```hcl
data "alkira_group_connector" "from_groups" {
  for_each = toset(local.from_groups)
  name     = each.value
}

data "alkira_group_connector" "to_groups" {
  for_each = toset(local.to_groups)
  name     = each.value
}

resource "alkira_policy" "policy" {
  name         = var.name
  enabled      = var.enabled
  from_groups  = local.from_grp_ids
  to_groups    = local.to_grp_ids
  segment_ids  = [data.alkira_segment.business.id]
  rule_list_id = data.alkira_policy_rule_list.forti.id
}
```

Validation

Let’s validate our policies to make sure they meet the requirements. Alkira’s ability to visualize policy comes in handy when reviewing multi-cloud policy, traffic flows, and network security more broadly. Being able to integrate this into the DevOps toolchain makes for a great experience.

Validate Policy

Conclusion

Ever heard of that single product that solved all your organization’s network and security problems? Me neither! Valuable integrations like this one solve real problems. With the explosion of cloud services, it pays to zoom out and think about the long game. Supporting the legacy applications you can’t migrate, the applications on deck for migration to the cloud, and greenfield applications across multiple clouds is a winning strategy. Alkira and Fortinet are two products that can help shift your focus to outcomes.
