A Quick Glossary Of Evolving Infrastructure Technologies
Posted September 14, 2023 by Sayers
Staying on top of evolving infrastructure technologies takes time. Probably more time than you’ve got to spare. Fortunately, Sayers has people to do that for you.
Sayers' IT Infrastructure and Cloud Practice recently updated its Technology Maturity Cycle. Whether you want to be prepared for innovations still on the horizon, grow as a frontrunner with emerging technologies, or be more productive with established offerings, use these infrastructure glossary highlights to stay in the know.
Watch For These Coming Infrastructure Innovations
These early-stage technologies may be two to three years out, or longer. Still, they're worth monitoring:
Cyberstorage. Gartner coined the cyberstorage category in 2022, referring to new storage and data protection technologies to defend unstructured data from ransomware and malware attacks. Cyberstorage solutions provide an additional layer of protection between the network infrastructure and the data storage system. According to Gartner, by 2028 100% of storage products will include cyberstorage active threat defense capabilities.
DNA Storage. DNA storage encodes data into DNA strands held in cold storage. DNA's density makes it possible to pack vast amounts of data into a tiny space: a single gram of DNA could hold roughly 215 petabytes. The first operational DNA data center could arrive within the next 5-10 years.
Immutable Infrastructure. Like immutable storage but at the server level, immutable infrastructure is never modified once it's deployed; instead of patching a server in place, you replace it, which increases reliability and security. Mark McCully, Senior Solutions Architect at Sayers, says:
“If a problem arises after you’ve introduced a new server into the infrastructure, you would have to break down the entire server and then spin it back up again.”
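To make that pattern concrete, here's a minimal Python sketch of the "replace, don't patch" workflow McCully describes. The functions it calls (build_image, launch_server, health_check, swap_traffic, destroy_server) are hypothetical placeholders for whatever imaging and provisioning tooling your environment actually uses:

```python
# Minimal sketch of the immutable "replace, don't patch" pattern.
# build_image, launch_server, health_check, swap_traffic, and destroy_server
# are hypothetical placeholders for real tooling (e.g., image builders,
# Terraform, or cloud provider APIs).

def deploy_new_version(app_version: str, current_server: str) -> str:
    image = build_image(app_version)      # bake a fresh, versioned server image
    candidate = launch_server(image)      # spin up a brand-new server from it

    if not health_check(candidate):
        destroy_server(candidate)         # never patch in place; discard and rebuild
        raise RuntimeError(f"Rollout of {app_version} failed health checks")

    swap_traffic(to=candidate)            # cut traffic over to the new server
    destroy_server(current_server)        # retire the old one instead of mutating it
    return candidate
```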
Quantum Computing. Quantum computing is still in the research phase, with universities and companies like IBM testing equipment that uses quantum bits (qubits) to perform complex calculations. Quantum computers could break today's larger encryption keys far faster than classical machines, but they also make more robust encryption techniques possible.
Emerging Infrastructure Technologies Begin To Gain Traction
Several emerging infrastructure technologies have gained attention for offering more flexibility, greater speed, or more efficient use of resources. Among them:
Generative AI. Generative AI is creating a lot of industry buzz, and companies are trying to determine how to leverage it to improve productivity and time-to-market. While Generative AI can bring many benefits to IT, data growth and security are two key risks to keep in mind as you architect a solution. Another consideration is to start with a consumption-based Large Language Model (LLM) platform, which lets companies test AI/ML initiatives without making large capital investments in compute and storage.
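As a rough illustration of that consumption-based approach, here's a minimal Python sketch that calls a hosted LLM through the OpenAI Python SDK. The model name and prompt are placeholders, and any OpenAI-compatible platform would look similar:

```python
# Minimal sketch of using a consumption-based LLM service instead of hosting
# models on your own compute. Assumes the OpenAI Python SDK (openai>=1.0) and
# an OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # pay per token: no GPUs or storage to buy up front

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model your platform offers
    messages=[{"role": "user", "content": "Summarize last week's change tickets."}],
)
print(response.choices[0].message.content)
```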
Bare Metal as a Service (BMaaS). Today more vendors are pitching BMaaS as companies look to transition to cloud or colocation data centers. Rather than offering preconfigured virtual servers in the cloud, BMaaS provides a dedicated physical infrastructure as a clean slate. This allows you to customize your server with the operating system and software you want.
Compute Express Link (CXL) – Memory Pooling. CXL memory pooling gives multiple host processors access to shared memory resources for more efficient use of memory. Stephen Johnson, Sayers Solutions Architect, says:
“Memory is the biggest bottleneck when it comes to performance on the server-related platform. Private cloud or public cloud will be able to utilize this technology to dynamically change the memory allocation for data-intensive workloads.”
Container Backups. Backing up containers means capturing their state at a specific point in time. For business continuity and disaster recovery, you need to be able to replicate containers in a production environment, which means backing up the container images, the attached storage, and the persistent volumes. Purpose-built solutions for backing up Kubernetes container environments are also available.
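As a small illustration of the inventory step, the sketch below uses the official Kubernetes Python client to list the persistent volume claims a backup would need to capture. It assumes a working kubeconfig and is not any particular backup product's workflow:

```python
# Minimal sketch: inventory the persistent volumes behind a Kubernetes
# environment, the pieces a container backup must capture alongside the
# images. Assumes the official `kubernetes` Python client and a kubeconfig;
# a real backup tool would snapshot or copy what this lists.
from kubernetes import client, config

config.load_kube_config()          # use the current kubeconfig context
v1 = client.CoreV1Api()

for pvc in v1.list_persistent_volume_claim_for_all_namespaces().items:
    capacity = pvc.status.capacity.get("storage") if pvc.status.capacity else "unbound"
    print(
        f"{pvc.metadata.namespace}/{pvc.metadata.name}: "
        f"storage_class={pvc.spec.storage_class_name}, capacity={capacity}"
    )
```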
Digital Twins. A digital twin is a virtual replica of your data environment, used for ransomware protection, disaster recovery, and compliance validation. The technology can create full-scale copies of your data centers in isolated networks, where you can test and confirm any threats are gone before making the systems available to end users. Johnson says:
“If you want to be able to recover really quickly from ransomware, this is going to be a good solution for you. This also has a huge DevSecOps opportunity in terms of pen testing and forensic analysis. If you get ransomware, you can start doing forensic testing on those digital twins right away and not infect your production environment.”
PCIe 5.0. A Peripheral Component Interconnect (PCI) bus connects the CPU with various peripherals. The PCI Express (PCIe) 4.0 interface standard raised the maximum potential bandwidth of a PCIe slot to 64 gigabytes per second. PCIe 5.0 doubles that to 128 GB/s, bringing higher-speed computing and more performance power to the data center. Expect to see PCIe 5.0 in Generation 12 servers, with faster links between the CPU and the PCI bus.
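For the curious, here's the back-of-the-envelope math behind those figures, assuming a full x16 slot with bandwidth counted in both directions and PCIe's 128b/130b encoding:

```python
# Back-of-the-envelope PCIe bandwidth math for a full x16 slot, counting
# both directions and accounting for 128b/130b encoding overhead. The
# commonly quoted 64 GB/s and 128 GB/s figures round these results up.
def x16_bandwidth_gbytes(transfer_rate_gt_per_s: float) -> float:
    per_lane = transfer_rate_gt_per_s * (128 / 130) / 8  # GB/s per lane, one direction
    return per_lane * 16 * 2                             # 16 lanes, both directions

print(f"PCIe 4.0 x16: ~{x16_bandwidth_gbytes(16):.0f} GB/s")  # ~63 GB/s
print(f"PCIe 5.0 x16: ~{x16_bandwidth_gbytes(32):.0f} GB/s")  # ~126 GB/s
```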
Serverless Infrastructure. The infrastructure is “serverless” only in the sense that you don’t manage the servers directly. In this cloud-native development model, the cloud provider handles provisioning and upkeep so developers can focus on code. Serverless infrastructure scales automatically, offers high availability, and uses pay-as-you-use billing, with resources allocated only when needed.
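As a concrete example, a serverless function can be as small as the sketch below, which follows AWS Lambda's Python handler convention; the payload fields are illustrative:

```python
# Minimal sketch of a serverless function using AWS Lambda's Python handler
# convention: you write only this code, and the cloud provider provisions,
# scales, and bills the underlying servers per invocation.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g., an API Gateway request);
    # 'context' carries runtime metadata supplied by the platform.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```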
Software-Defined Data Center (SDDC). With SDDC, the entire data center environment, including networking, storage, compute, and security, is virtualized rather than relying on hardware configurations to manage specific tasks. This approach provides greater flexibility and efficiency in data center management, with software allocating data center resources where they're needed.
Productive Infrastructure Technologies Offer Established Benefits
Unstructured Data. Multiple factors are driving exponential growth in unstructured data, including AI/ML, regulatory data retention requirements, full-stack observability platforms, and security logging. Gartner predicts that by 2028 organizations will triple their unstructured data capacity across on-premises, edge, and public cloud locations. Companies will need to determine where, and for how long, to store unstructured data while balancing cost against performance. Newer, next-generation storage platforms can consolidate traditional file shares and unstructured data onto a single platform for easier administration.
Anything as a Service (XaaS). Consumption-based as-a-service models can be a more cost-effective way to manage your infrastructure (IaaS) and platform (PaaS) needs. In a traditional as-a-service model, the provider owns the hardware and you run your applications on it. If you need increased storage capacity, the provider can add a new array.
In a non-traditional approach, some vendors host a web-based management tool to manage your on-premises hardware and call it an as-a-service model. Your DevOps team can log in to the web portal, create storage volumes, and attach them to the servers they have access to.
Hybrid Cloud and Multicloud. Many organizations want the flexibility to combine on-premises infrastructure with one or more public clouds; hybrid cloud and multicloud architectures provide exactly that. Wade Scheffner, Solutions Architect at Sayers, says:
“Some clients have environments on premise and in Microsoft Azure and now want to back it all up in AWS. That multicloud functionality is starting to be included in a lot of backup tools, so the customer can have their data wherever they want it. They’re not locked into any cloud services, or even locked into being on prem.”
Observability. Often discussed on the security side in terms of knowing what and where your assets are, observability applies to infrastructure as well. Knowing what you have, where it is, and what it’s doing gives you the information you need for more efficient resource utilization and data backups. Next-generation observability platforms can also help organizations consolidate multiple monitoring tools into a single pane of glass, respond to alerts more proactively, and improve mean time to recovery for critical assets.
Observability platforms provide real-time, full-stack monitoring of your public cloud assets, on-premises server and network infrastructure, and application performance. They also leverage AI/ML to build an automated performance baseline of your environment and proactively detect and alert on anomalies with minimal false positives.
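As a simplified illustration of that baseline-and-anomaly idea, the sketch below flags metric samples that drift more than three standard deviations from a rolling baseline. Real observability platforms use far more sophisticated models:

```python
# Tiny sketch of the "learn a baseline, flag the outliers" idea behind
# ML-driven observability alerts: a rolling mean/stddev baseline flags any
# sample more than `threshold` standard deviations away.
import statistics

def find_anomalies(samples, window=30, threshold=3.0):
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9   # avoid divide-by-zero on flat data
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# Example: steady CPU utilization with one spike at the end.
cpu_percent = [40 + (i % 5) for i in range(60)] + [95]
print(find_anomalies(cpu_percent))   # -> [(60, 95)]
```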
Questions? Contact us at Sayers today for guidance on the right technology solutions to help your business.