NGINX, an open source, high-performance HTTP server, reverse proxy, and IMAP/POP3 proxy server, has gained popularity as a load balancer. I caught up with Sarah Novotny, head of Developer Relations at NGINX, ahead of her All Things Open session later this month, and asked her to explain NGINX’s growing popularity.
“NGINX provides a software-based application delivery platform that load balances HTTP and TCP applications at a fraction of the cost of hardware solutions,” she says. “This allows organizations to maximize the availability and reliability of their sites and applications, and minimize disappointed customers and lost revenue.”
Beyond being free, scalable, and easier to maintain than hardware appliances, the key reason many organizations want an open source load balancer is that it provides a more flexible development environment, which helps them adopt a more agile development process. Sarah says that, compared with those other options, NGINX offers huge performance improvements.
“With NGINX, organizations can deliver applications reliably and without fear of instance, VM, or hardware failure,” she says. “This is crucial as websites and applications make their way into our everyday lives.”
In the typical organization, the web server and the ADC (application delivery controller, often a hardware appliance) are separate components. NGINX is changing that approach to web application delivery: it combines the two elements, providing performance and scalability at both the ADC and web layers.
NGINX can be deployed on your hardware of choice, sized and tuned for specific workloads, and it provides the flexibility to optimize for workload requirements in any physical, virtual, or cloud environment. IT environments are changing quickly, and more organizations are adopting DevOps and microservices tools, Sarah explains.
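To make that combined, tunable role concrete, here is a minimal configuration sketch of a single NGINX instance acting as both web server and load balancer. The pool addresses, paths, and tuning values are placeholder assumptions to be sized against the actual workload, not recommendations.

```nginx
# Illustrative sketch only: one NGINX instance serving static content
# (web server role) and proxying application traffic (ADC role).
worker_processes auto;            # one worker per CPU core

events {
    worker_connections 4096;      # per-worker ceiling; tune to the workload
}

http {
    upstream app_servers {        # hypothetical back-end pool
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        # Web server role: serve static assets directly from disk.
        location /static/ {
            root /var/www/example;
        }

        # ADC role: load balance everything else across the pool.
        location / {
            proxy_pass http://app_servers;
        }
    }
}
```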
“NGINX continues to innovate, having just announced nginScript, which uses a known language, JavaScript, to extend the software’s capabilities at the edge of the network,” she says. “It’s also easy to automate deployment and configuration with tools like Puppet and Chef, so time-consuming maintenance work can be avoided.”
According to Sarah, NGINX is being used in a number of different scenarios, from handling all the load balancing duties to sitting behind a legacy hardware-based load balancer. “This makes it easier for organizations to integrate a private cloud or a hybrid cloud-based solution into their existing environment without needing to rip and replace what they have,” she adds.
Sarah says that the simplest deployment scenario is where NGINX handles all the load balancing duties.
“NGINX might be the first load balancer in the environment or it might be replacing a legacy hardware-based load balancer. Clients connect directly to NGINX, which then acts as a reverse proxy, load balancing requests to pools of back-end servers,” she says.
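In configuration terms, this simplest scenario amounts to little more than an upstream pool and a proxy_pass directive. A minimal sketch, with the pool name and server addresses as illustrative assumptions:

```nginx
http {
    upstream backend {
        least_conn;               # send each request to the least busy server
        server 192.168.1.10;
        server 192.168.1.11;
        server 192.168.1.12;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            # Preserve the original host and client address for the back ends.
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```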
She explains that this scenario has the benefit of simplicity with just one platform to manage, and can be the end result of a migration process that starts with another deployment scenario. For example, another scenario they are seeing is NGINX introduced to load balance new applications in an environment where a legacy hardware appliance continues to load balance existing applications.
“NGINX and the hardware-based load balancer are not connected. Clients connect directly to NGINX, which can offload SSL, cache static and dynamic content, and perform other advanced ADC functions,” she explains.
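A sketch of what that parallel deployment might look like, with NGINX terminating SSL and caching responses in front of the new applications; the certificate paths, cache sizing, and addresses are placeholders:

```nginx
http {
    # Cache zone for offloading repeated responses from the back ends.
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

    upstream new_apps {
        server 10.0.1.21:8080;
        server 10.0.1.22:8080;
    }

    server {
        listen 443 ssl;           # SSL terminates here, not on the back ends
        ssl_certificate     /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            proxy_cache app_cache;
            proxy_cache_valid 200 5m;   # cache successful responses briefly
            proxy_pass http://new_apps;
        }
    }
}
```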
According to Sarah, the usual reason for deploying NGINX in this way is that a company wants to move to a more modern software-based platform, but does not want to rip and replace all of its legacy hardware load balancers.
“By putting all new applications behind NGINX, an enterprise can start to implement a software-based platform, and then over time migrate the legacy applications from the hardware load balancer to NGINX,” she says.
Sarah’s team is seeing scenarios where NGINX sits behind a legacy hardware-based load balancer.
“Here clients connect to the hardware-based load balancer, which accepts client requests and load balances them across a pool of NGINX instances; each NGINX instance in turn load balances requests across the group of actual back-end servers,” she explains.
Sarah says this scenario is most often used because of corporate structure.
“In a multi-tenant environment where many internal application teams share a device or set of devices, the hardware load balancers are often owned and managed by the network team, but need to be accessed by several teams,” she says. Because one team might make configuration changes that negatively impact other teams, a solution is to deploy a set of smaller load balancers, such as NGINX instances, so that each application team has its own and can make changes without requesting permission.
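A minimal sketch of one such per-team NGINX instance in the second tier. The addresses are assumptions; the realip directives show how the instance can recover the original client address that the hardware load balancer forwards:

```nginx
http {
    upstream team_backend {           # this team's own back-end servers
        server 10.0.2.31:8080;
        server 10.0.2.32:8080;
    }

    server {
        listen 80;

        # The hardware load balancer (assumed here to sit on 10.0.0.0/24)
        # is the direct peer, so restore the real client address from the
        # header it forwards.
        set_real_ip_from 10.0.0.0/24;
        real_ip_header   X-Forwarded-For;

        location / {
            proxy_pass http://team_backend;
        }
    }
}
```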
What is special about load balancing in the cloud?
As with other industries, tech organizations are moving from hardware and toward cloud-based infrastructures because they are less expensive, more scalable, easier to maintain, and provide a more flexible development environment, Sarah explains.
“Software-based solutions also provide added agility, which allows application teams to complete faster development cycles and, in turn, concentrate on making better features and keeping up with demand,” she says.
She notes that many legacy hardware-based solutions cannot provide the level of performance expected of today’s sites and applications.
“Hardware is not only expensive and time consuming to deploy, it can be rigid and limiting in terms of management and the addition of new components,” she says.
Sarah explains that before NGINX, organizations typically used separate components for the web server and the application delivery controller (ADC) or reverse proxy load balancer, and the load balancing tool was usually a hardware component. “At NGINX, we are changing this approach by combining the two elements into a single software-based tool for web delivery that offers performance and scalability across all layers,” she says.
Among the advantages of a fully cloud-native solution like NGINX, she explains, is rapid, flexible deployment.
“Unlike legacy hardware ADCs, software ADCs are built natively to deploy anywhere,” Sarah says. “They fit easily into cloud and virtual environments, with open APIs that enable integration with a number of other tools.”
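As one concrete integration hook, open source NGINX ships a stub_status module whose basic counters external monitoring and automation tools can poll. A minimal sketch, with the port and allowed network as assumptions:

```nginx
# Inside the http context: expose NGINX's built-in counters to
# internal tooling only.
server {
    listen 8080;

    location /nginx_status {
        stub_status;              # active connections, accepts, requests
        allow 10.0.0.0/8;         # restrict to the internal network
        deny  all;
    }
}
```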
She says that hardware places artificial limits on load balancing that can be a nightmare to lift, whereas NGINX software will run as fast as you let it, whenever you want, and in any environment.
“Additionally, DevOps and microservices are changing the development process, not only allowing applications to be delivered with greater performance but also letting them be developed to perform better from the start,” Sarah adds. “Because it offers simple software configuration without high costs or complexity, application teams can use NGINX throughout the entire development process, and delivery can be tackled as part of the development cycle.”
This article is part of the All Things Open Speaker Interview series. All Things Open is a conference exploring open source, open tech, and the open web in the enterprise.