Photo: Christopher Gower, Unsplash

Omni-channel, Cloud, Open Source, Microservices, Security, Scalability, Agility – these are just some of the concerns facing technology teams as they work to quickly deliver customer focused digital solutions.

At Marlo, we have seen organisations spin their wheels while designing and building the infrastructure and delivery capability to operate in a digital environment. In response, we have tapped into our combined experience to produce the Marlo Digital Enablement Platform [MDEP]. MDEP is an opinionated and extensible platform that has been designed around the following principles:

  • The best open-source, SaaS and cloud-native services are combined
  • Containerised workloads are the unit of deployment
  • Managed Kubernetes is the runtime environment
  • APIs/messaging are the standard model of external interaction
  • The platform is cloud agnostic
  • Security is designed in from the ground up
  • Delivery pipelines are fully automated
  • Platform provisioning and upgrades are zero-outage

That’s nice, but what do I do with it?

Much as it’s fun to kick off a CI/CD pipeline and watch a new production-ready cloud platform spring into life in less than an hour, we knew we had to show how the platform reduces the workload on delivery teams: developers, testers, and DevOps engineers.

To do this, we have set about building two technology demonstrators covering business domains in which we are heavily involved. Even if you don’t work in banking or government, they still show how the platform accelerates delivery.

Our demonstration applications

The Open Banking demonstration provides both web and mobile interfaces that allow users to log on and interact with typical banking features, including account and transaction lookups, updating personal details, and making payments. Core system functionality comes from a mix of a mock banking system and live calls to public Open Banking APIs.

The Victorian Government Planning demonstration simulates access to VicPlan information for a citizen looking up the details of a property, including its local government area and planning scheme overlays. This demonstration retrieves its data from public APIs on the Internet.

Each application showcases technology features that are critical to providing modern real-world applications:

Microservices managed as a mesh. A microservice is a small, business-oriented software component that takes exclusive responsibility for an individual domain. This architecture helps teams manage scale and the need for rapid change. The platform automatically deploys microservices into the open source Istio service mesh, which abstracts API traffic management concerns such as discovery and security away from developers, and provides common resilience patterns including retries and circuit breakers.
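
In Istio these resilience patterns are configured declaratively and enforced by the sidecar proxy, so application code never implements them. To illustrate what the mesh is doing on your behalf, here is a minimal Python sketch of the retry and circuit-breaker patterns; the class and its thresholds are illustrative, not part of MDEP or Istio:

```python
import time

# Illustrative only: Istio provides retries and circuit breaking as sidecar
# configuration; this sketch shows the equivalent behaviour in plain Python.
class CircuitBreaker:
    """Fail fast after max_failures consecutive errors, then allow a
    trial call once reset_after seconds have elapsed (half-open state)."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, retries=2, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # half-open: permit one trial call
            self.failures = 0
        for attempt in range(retries + 1):
            try:
                result = fn(*args, **kwargs)
                self.failures = 0   # a success resets the failure count
                return result
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.monotonic()
                    raise RuntimeError("circuit open: failing fast")
                if attempt == retries:
                    raise
```

The point of moving this logic into the mesh is exactly that no service team has to write or tune code like this: the same policy applies uniformly across every service.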

APIs and Integration. Microservice logic, core systems, and external interfaces are abstracted behind well-structured REST and RPC APIs. This enables quick adoption by multiple user channels, such as the web and mobile interfaces implemented in the demonstrations.
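
As a sketch of this idea, the toy service below exposes a single JSON endpoint that a web front end and a mobile app would consume identically. The account data and paths are invented for illustration; in MDEP such a service would sit behind the API gateway and service mesh rather than a bare HTTPServer:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical account-lookup data, standing in for a real banking core.
ACCOUNTS = {"1234": {"owner": "Jane Citizen", "balance": 250.75}}

class AccountHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /accounts/1234 -> the account JSON, or a 404 error body
        account = ACCOUNTS.get(self.path.rsplit("/", 1)[-1])
        body = json.dumps(account or {"error": "not found"}).encode()
        self.send_response(200 if account else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

def serve():
    """Start the service on a free local port and return the server."""
    server = HTTPServer(("127.0.0.1", 0), AccountHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve()
    port = server.server_address[1]
    with urlopen(f"http://127.0.0.1:{port}/accounts/1234") as resp:
        print(json.load(resp)["owner"])  # the same JSON serves web and mobile
    server.shutdown()
```

Because every channel consumes the same contract, adding a new channel is an integration exercise rather than a rebuild.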

Containerised deployment onto the Cloud. By packaging workloads into containers and deploying them onto public cloud infrastructure, MDEP leverages the enormous scalability and resilience offered by the major cloud providers. The deployable unit is a Docker image, which allows workloads to be distributed across Kubernetes clusters.

On demand provisioning of supporting components. The build pipelines have been designed to readily provision extension components such as databases and caching in support of the business logic.

Security. MDEP has been designed to be secure from its inception. Many security features, including secured inter-service communication, network zoning, and policy enforcement via an API gateway and service mesh, are provisioned by default by the CI/CD pipelines that build platform instances and deploy applications. The Open Banking application demonstrates integration with an external identity provider to provide OAuth 2.0 and multi-factor authentication.
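
To make the OAuth 2.0 integration concrete, the sketch below builds the token request an application would POST to an identity provider under the client-credentials grant. The URL, client id, secret and scope are hypothetical placeholders, not real MDEP endpoints:

```python
from urllib.parse import urlencode

# Sketch of an OAuth 2.0 client-credentials token request. All values used
# below (URL, client id, secret, scope) are invented for illustration.
def build_token_request(token_url, client_id, client_secret, scope):
    """Return (url, body, headers) ready to POST to the token endpoint."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    }).encode()
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return token_url, body, headers

if __name__ == "__main__":
    url, body, headers = build_token_request(
        "https://idp.example.com/oauth/token",
        "demo-app", "s3cret", "accounts:read")
    print(body.decode())
```

The tuple can be sent with `urllib.request.Request`; the provider responds with a JSON access token, which the application then presents as a Bearer header on each API call through the gateway.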

DevOps pipeline automation. MDEP and its associated agile development practices are aligned with modern DevOps practice. Changes to platforms are only permitted via the CI/CD pipelines, ensuring that all infrastructure and code is managed under source control and CI/CD processes.

What’s a Digital Enablement Platform?

Digital delivery requires speed and a focus on customer experience rather than technology. To enable this, a digital platform needs to remove as many technology concerns as possible. Marlo’s platform provides an opinionated, automated default configuration for the entire end-to-end lifecycle of digital development. To achieve this, it leverages what we believe are current best-practice tools and services, including:

  • Deployment onto any of the major cloud providers
  • Use of cloud-native and open source components, allowing costs to scale to zero for unused components
  • Full automation via CI/CD pipelines using a combination of GitLab, Red Hat Ansible, and Hashicorp Terraform
  • Docker, Kubernetes and Istio for workload management

What do build teams get from the platform?

Product Owners avoid a lengthy planning, architecture and procurement ramp-up period by using an opinionated platform based on our experience and best practice.

Architects avoid license driven architectures and product lock-in by using cloud-native, SaaS, and open source components.

Designers and Developers focus on business logic while using development standards including SCM, naming standards, monitoring & logging, automated code defect scanning, and API documentation.

Testers benefit from the Karate test automation framework embedded into the CI/CD pipelines; tests are written using Behaviour-Driven Development (BDD) syntax. The Selenium framework provides UI testing. Together they cover the full range of testing types, including functional, UI, and performance.

DevOps teams are provided with automated and zero-outage deployments, the ability to quickly provision new platform instances, source and artefact management, and a simple mechanism to provide supporting components such as databases.

Support teams can readily visualise the state of both the platform instances and the microservices running on them. The open source Kiali service mesh management console, and cloud platform services such as AWS CloudWatch are utilised to ensure each platform is easy to operate.

Can I see this for myself?

If you are starting your digital journey, or if your current technology practices are delivering too slowly, then Marlo would be happy to demonstrate and discuss how MDEP can address your specific needs. Using automation, we can show a new, secure, and scalable platform instance being created in real time during our discussions.

Kong HQ

For our November Tech Forum, Vikas Vijendra from Kong visited our Melbourne office to bring us up to speed on what’s happening at KongHQ.

At Marlo we are already familiar with the Open Source Kong API Gateway and we like how it fits into our own digital enablement platform. Kong, however, are making a bold shift in product direction with the announcement of their Service Control Platform. They understand that while we might be focused on RESTful APIs today, the future will also include protocols such as gRPC, GraphQL and Kafka. Moreover, the advent of Kubernetes as the container platform of choice means Kong needs to extend into the cluster itself to provide full lifecycle service management.

The main features of the Service Control Platform are:

  • A centralized control plane to design, test, monitor and manage services
  • Multiple Runtimes – not just the nginx engine of Kong but also Istio, Kuma, Apollo and serverless
  • Multiple Protocols – REST, gRPC, GraphQL and Kafka
  • Multiple Platforms – All major cloud providers plus any Kubernetes

The open source API Gateway will remain, with most of the new features available in the Kong Enterprise offering. These include:

  • Kong for Kubernetes (K4K8S): a supported version of the Kong Ingress Gateway for Kubernetes, along with all enterprise plugins
  • Kong Studio: for designing, mocking and testing APIs
  • Kong Manager: for the runtime monitoring and management of deployed services
  • Kong Developer Portal: a self-service portal providing access to the service catalog

All of the above features are available as a SaaS offering (Kong Cloud) or on-premise, or any combination of the two.

Perhaps most interesting is the announcement of the Kuma service mesh. An Ingress Controller alone is limited to managing traffic entering a cluster (north-south traffic). In a microservices architecture, most of the traffic is between services on the same cluster (east-west traffic). A service mesh allows control of the traffic between these services.

Of course, Istio is the dominant product in the service mesh space, but Kong (and others) believe Istio has become too complex and that Kuma provides a more appropriate level of functionality. The functionality of the Ingress Gateway and the service mesh will eventually morph into a single product controlling both north-south and east-west traffic.

Tech Lead Vishal Raizada recently conducted a very informative Tech Forum at the Marlo Office. He presented on Istio: Architecture, Application and Ease of Implementation.

Our tech forum presentation is downloadable here and showcases an example of Istio’s implementation, application and benefits.

Istio is now a key part of the Marlo Digital Enablement Platform – our open source, cloud-native platform which provides a complete on-demand environment for digital delivery.

The enterprise application landscape has changed a lot in the last decade: from managing on premise servers to using infrastructure as a service; from monolithic applications to building microservices.

The new world offers many benefits but it also introduces new challenges. With the distributed nature of the application landscape, service discovery and general application composition becomes extremely complex. Controls, such as traffic management, security and observability, which could previously be managed in one place now become a scattered problem.

Enter Istio, a service mesh framework that wraps around a cloud-native architecture and adds a layer of abstraction to manage these complexities. It enables a truly automated delivery process, where a development team can focus purely on code and Istio handles the rest, including service discovery, security, circuit breaking and much more. In addition, it is programmable, so it can be incorporated into DevOps and DevSecOps processes with ease. A service mesh gives control back to the enterprise application world without taking away any of the benefits.

Read Vish’s full presentation here.

Cutting Environment Costs In The Digital Age

If you’re a CIO, or an infrastructure manager, then you’ve probably got a mandate from the CFO or the CEO to cut costs. And you’re running a complex set of applications across multiple environments – at least 3 (production, test and dev). Depending on how mature your infrastructure team is, you might already be running 5 or 6 environments, or even more.

But how many environments do you really need?

You need multiple dev and test environments to deal with different projects and agile teams delivering at different cadences, each wanting its own separate environments. You’re probably operating in multiple data centres, and you have to worry about multiple cloud providers and SaaS vendors.

If money were no object, you’d be scaling to 20 or 30 environments, because that’s what your delivery teams are telling you they need. Costs aren’t going down in line with your cost-cutting mandate; they’re going up.

So, here’s a radical thought: the number of environments that you actually need to look after is… 1. (And if you’re good, it might be none).

What Do You Actually Want, Anyway?

You want to do the things you need to be able to do and do them well. So, if you’re working for a brewing company, that means you need to ensure your company is good at making, selling and delivering beer.

But as the CIO, you’re in charge of the apps that enable all that good stuff. You want software that works, running in production, on kit that doesn’t fall over, at a reasonable cost. That’s about it.

If you didn’t have to worry about managing multiple non-production environments across the data centre and the cloud, and all the cost and complexity that comes with them, then we bet that, frankly, you’d give it all up tomorrow.

Getting to One

To see why you only need that one environment, and why you can get rid of all the rest, let’s look at three key technologies that have matured over the last 10 years: cloud, DevOps, and APIs and microservices.


Cloud

The grand promise of cloud is that infrastructure is available on demand. You can have any number of servers, at any scale, whenever you want them. As much as you like. Somewhere in Sydney, Tokyo, Stockholm, London, São Paulo or Mumbai is a data centre the size of a football field, and it’s yours for the taking. If you want a dozen 128-CPU boxes with over 3TB of RAM, several petabytes of storage and 25-gigabit networking, they’re all yours (as long as your credit card is working!). You can have this, literally in minutes, any time of day or night.


DevOps

We can go one step further than that: DevOps says not only is infrastructure available on demand, but that it is code. You can automate the provisioning of infrastructure and, on top of that, automate the deployment of all your applications.

You can have software on demand, not just infrastructure. By extension you can construct an entire environment whenever you need it, wherever you need it – and again by extension, you can throw it away whenever you don’t need it.

APIs and Microservices

But that doesn’t go quite far enough. An API gateway lets you securely compartmentalise your environments: by insisting that every interaction between systems is mediated through an API gateway, you build a standard interface mechanism that is network-agnostic, so it matters less which network your APIs (and the (micro)services they provide façades for) live on. Coupled with the ability, in non-production environments at least, to mock and stub API services, this vastly reduces the need to manage and run monolithic environments that contain all your services at once.
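
The stubbing idea is simple enough to sketch in a few lines of Python. The façade and stub below are illustrative names, loosely modelled on the planning demonstration described earlier, not real MDEP code:

```python
# Sketch of stubbing an upstream service behind an API façade, so a dev or
# test environment doesn't need the real system at all. Names are invented.

def lookup_property(planning_api, address):
    """Return planning details for an address. Any object with a
    get(path) method will do, which is exactly what makes stubbing easy."""
    record = planning_api.get(f"/properties?address={address}")
    return {"lga": record["lga"], "overlays": record["overlays"]}

class StubPlanningApi:
    """Stand-in for the real gateway-fronted service; returns canned data."""
    def get(self, path):
        return {"lga": "Melbourne", "overlays": ["Heritage Overlay"],
                "zone": "Commercial 1"}  # extra fields are simply ignored

if __name__ == "__main__":
    result = lookup_property(StubPlanningApi(), "1 Spring St, Melbourne")
    print(result)
```

Because the façade only depends on the API contract, swapping the stub for the real gateway-fronted service is a configuration change, not a code change.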

If your infrastructure is available on demand, and infrastructure is code, and environments are compartmentalised by API Gateways, then anyone can bring a dev or test environment – you don’t need to care where it is. It doesn’t need to be in your data centre, and it doesn’t really need to be in your VPC either.

Which Environments Do You Actually Need?

Production, maybe, and then only because you’ve still got legacy applications that you haven’t yet hidden behind APIs. But give as much of that away as you can, as soon as you can, using the SaaS model as your template.

Wherever possible, you should outsource the problem of running dev environments to the vendors who do your build and test work. They should be doing it on their kit, at their cost.

They’ll be super-efficient: there will be no dev environment running if they’re not actually doing dev right this minute, unless they enjoy the smell of burning (their own) money. There’s no point in you running dev environments any more. Platforms like the Marlo Digital Enablement Platform [MDEP] provide rapid-start environments where dev teams can be up and running, building business code, in a few hours rather than days or weeks.

Furthermore, you should be making vendors run your testing environments for the applications that they’re delivering, and for the same reasons as dev. You still have to manage test data (and most organisations still have to solve for privacy, but they seem to manage that just fine when they implement Salesforce). And you’ll need to ensure that they make their environments available whenever end-to-end testing is going on.

What You’re Still Going To Have To Solve

  • Security provisioning and network access to any environments that you’re still running
  • Making sure that legacy applications have their own APIs (and API gateways) in front of them, so they can be accessed safely by external developers
  • Vendor contracts that encourage the right behaviour when vendors run dev and test environments
  • Access to code (escrow arrangements)
  • Standards and guidelines for vendors delivering applications and services to you
  • Providing platforms like the Marlo Digital Enablement Platform [MDEP] to standardise and govern the way that your applications are built and deployed – mostly for non-functionals like security, monitoring, logging and auditing
  • Dependency management on a grand scale (but you already have this problem, and well-designed APIs help)


What To Do Next

  • Make your vendors bring their own environments for digital delivery; embed requirements for how they should behave in contracts
  • Implement standards and guidelines for delivery – solve problems like containerisation, security, reliability, scalability, monitoring and logging in standard, cloud-native ways
  • Provide standardised platforms for hosting in production like MDEP, so that delivery can concentrate on business value
  • Engage with organisations like Marlo who truly understand the challenges of – and how to succeed in – today’s complex digital environments