OpenStack Use Cases

Prior to moving into the role of Technical Marketing Manager for Red Hat’s Partner Solutions Initiative, I was responsible for selling the Red Hat Cloud Portfolio for Mid-Market North America. I was the sole architect covering the entire country, and the first in the company with this title covering this market segment. People thought I was crazy, and maybe it was crazy to put focus into this market, but I overachieved against what was set in front of me.

Public cloud is just so easy and cheap; how could anyone selling “cloud” survive in this environment? Public cloud is as simple as swiping a credit card and you have a VM, but there are some use cases that public cloud providers can’t solve. Below I tried to remove the industry-focused terminology, because OpenStack is more than telco and HPC. This is how I would pitch the advantages of private cloud to a CxO.

According to the latest figures from IDC, outlined by Network World [1], server shipments increased 20.7% year over year in Q1 2018, and revenue rose 38.6%. Let’s see if these trends continue. That’s an amazing growth rate for such a large industry.

One last thing I want to touch on, on a personal note: I don’t think the Kubernetes-on-OpenStack focus the OpenStack Foundation has held is as helpful or profitable for them as a whole. Whenever I see the two demoed together, Kubernetes is doing all the interesting bits and OpenStack is just the pipes carrying the water. I can perform the same Kubernetes demo swapping out OpenStack for public cloud, or virtualization for that matter. OpenStack could be a larger part of the demo and the discussion through the aaS (as a Service) projects it offers, but Kubernetes steals the limelight. My $0.02.


USE CASES:

Self-Managed CapEx Cloud

I have frequently run into customers that have business process problems with OpEx expenditures. Employee salaries come out of the OpEx budget; if they go with AWS, they may literally be putting people out of a job or losing additional headcount. I’ve encountered a customer that based their customer payment rates on OpEx expenditures. This meant they could spend $10,000 in CapEx and not bat an eye, but $100 more of OpEx a month would ultimately make the bills for their customers larger.

A typical customer buys cloud services by the drink; OpenStack, along with a managed hosting partner or OEM, allows that same customer to buy by the pitcher. Make the cost a predictable CapEx expense: you choose how utilized you want your environment to be and have a defined cost for adding new resources to it. Help the CIO and CFO sleep better at night knowing what the cost will be, instead of tossing and turning because their AWS bill looks like a phone number.

My opinion is based on the sheer number of companies doing managed OpenStack and managed Red Hat OpenStack. Rackspace, IBM Bluemix, and Cisco Metacloud all host Red Hat OpenStack as a service. Dell, HPE, Supermicro, Cisco, NEC, and Lenovo all have reference architectures for Red Hat OpenStack, and many of them will do the racking, installing, and configuring of Red Hat OpenStack before wheeling it into your data center.

API Integrated Infrastructure

Customers are using external applications to provision both infrastructure and applications, so APIs become the integration points. Common examples of this are using an IT service manager, a CI pipeline, or an orchestration tool as part of the application pipeline. A typical consumer of cloud services wants to provision compute, networking, and storage all through an API, using a familiar development language and tools. You can choose Ansible, Heat templates, Puppet modules, or a number of other tools, but you must choose something to scale your development and operations teams. OpenStack offers an API-first approach to all of its components, which virtualization platforms like RHV or VMware don’t. RHV offers only Python bindings that don’t seem complete; for example, I don’t know if you could provision a new LUN on your NetApp connected to RHV via the API.
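To make the API-first point concrete, here is a minimal sketch using the openstacksdk Python library; the cloud name, image, flavor, and network names are placeholders I chose for illustration.

```python
# A minimal sketch of API-driven provisioning with the openstacksdk
# Python library. The cloud name ("mycloud") and the image, flavor,
# and network names are placeholders for illustration.
import openstack

# Credentials are read from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud='mycloud')

# Look up the building blocks by name.
image = conn.compute.find_image('rhel-7.5')
flavor = conn.compute.find_flavor('m1.small')
network = conn.network.find_network('private')

# Provision a server entirely through the API -- the same call a CI
# pipeline, IT service manager, or orchestration tool would make.
server = conn.compute.create_server(
    name='demo-server',
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{'uuid': network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

The same pattern works from Ansible or Heat; the point is that every resource type sits behind the same API surface.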

Converged Datacenter

This comes back to everything being API driven. The customer can use the OpenStack GUI, CLI, or RESTful API to provision resources. We also allow for hyperconvergence of compute and storage on the same box. Cinder and Swift have connectors into so many different storage systems that it becomes really convenient to manage resources in multiple ways and on multiple boxes, but through a common tool. You don’t need to rely on expensive SANs and proprietary storage systems.
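As a sketch of what “everything is API driven” means for storage, the same openstacksdk connection can carve a Cinder volume out of whatever backend is wired in; the volume name and size below are placeholders.

```python
# A sketch of storage provisioning through the same API surface.
# Cinder hides the backend (Ceph, NetApp, an existing SAN, etc.)
# behind one call; the name and size are placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')

# Create a 10 GB volume -- the backend is whatever Cinder is
# configured with, proprietary SAN or commodity disk alike.
volume = conn.block_storage.create_volume(name='demo-data', size=10)
conn.block_storage.wait_for_status(volume, status='available')
print(volume.id, volume.status)
```

The caller never needs to know which storage system actually served the request.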

IaaS provider / Managed Service Provider (MSP)

If we have a customer managing infrastructure for their customers, OpenStack is a good model because of the integration capabilities. This was a popular use case several years ago, and many startups and smaller integrators thought they could compete with the likes of Amazon or Azure. *Cough* Rackspace *Cough* Mirantis *Cough*. I think this is possible with OpenStack, but the investment required to start that business is cost prohibitive. Who wants to spend billions of dollars a year to find out? Didn’t think so.

Thinking on a smaller scale with more reasonable expectations, I think this use case has a lot of time left in it. I was working with a company sending satellites into space; in 2018 they intended to send 50 missions up, each getting its own multi-purpose cloud computing platform to support any application-driven workload the future throws at it. I was also working with a company that is using OpenStack to deliver its own Software as a Service to internal and external customers. I have many more use cases that are actually running production workloads. This is what OpenStack was originally designed to do.

 

NFV as a Service

This is our sweet spot within Red Hat OpenStack. We have many telcos coming to us to do advanced NFV functions at bare-metal speeds. However, I don’t think this is just a carrier use case. If you are running a hybrid cloud, having a platform that supports your specific NFV implementation in both the public and private clouds opens up the decision-making process and fits well into a cloud exit plan.

Asynchronous Scaling

Everything is API driven, and with the release of OSP 10 we now have composable roles. We can scale any service (like Networking) without having to scale other resources (like Identity). RHV-M, by contrast, is a “fat” application and can’t be broken out to scale independent parts.

I hope this helps give you some ideas and shows that OpenStack isn’t “dead”. I’m interested to hear what you think.

Resources

[1] https://www.networkworld.com/article/3278269/data-center/suddenly-the-server-market-is-hot-again.html

OpenShift Serverless

At Red Hat Summit 2018, there was a lot of focus on serverless of all sorts. A major new player, even though S2I has existed as long as OpenShift, is Functions as a Service on OpenShift. I’ve played with several options, but we have settled on Apache OpenWhisk.

I can say most of my customers are on the public cloud, the majority of them on AWS. Every one of my public cloud customers is talking to me about Lambda. The key takeaway is that everyone is talking about it without fully realizing the implications. If you commit to Lambda, you are committing deeply to AWS. The more functions you create, the harder it is, even if the code is portable, to rebuild everything somewhere else. I’ve drawn an analogy that seems to resonate, with me at least: “writing Lambda functions is like writing vendor-specific hardware kernel integrations back in the ’90s.”

The decision to change platforms becomes a much harder and longer one if you are not using open platforms and products. That’s been the value of open source platforms for 25 years.
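To show how thin the platform coupling can be, here is a minimal OpenWhisk action in Python; the function body and names are my own illustration, but the main(dict) -> dict contract is the OpenWhisk convention.

```python
# hello.py -- a minimal Apache OpenWhisk action. The "main" entry
# point taking and returning a dict is the OpenWhisk convention;
# the rest is plain, portable Python.
def main(params):
    name = params.get('name', 'stranger')
    return {'greeting': 'Hello, {}!'.format(name)}
```

Deploy and invoke it with the wsk CLI, e.g. `wsk action create hello hello.py` followed by `wsk action invoke hello --result --param name World` (the action and parameter names here are mine). Because the body is a plain function, moving it to another runtime is a packaging exercise, not a rewrite.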

 

Serverless on OpenShift Interactive Learning Platform
https://learn.openshift.com/serverless

 

Chris Wright at Red Hat Summit 2018: Emerging technology and innovation

 

See FaaS in action along with the power of your portfolio of middleware applications running on multiple clouds. Make your code and data portable; make it Red Hat.

Functions-as-a-Service with OpenWhisk and Red Hat OpenShift

 

Container Linux and RHEL: The road ahead

This is from the Red Hat Summit 2018 series.

I’m not the person to give you the most up-to-date roadmap for RHEL, and especially not for Container Linux. I’ve been focused on the multi-cloud management and automation world for the past few years. A lot of developments have been afoot. Here are some amazing highlights with my commentary. I strongly suggest that you check out the video below.

Immutable Foundation for OpenShift

The new Red Hat CoreOS (RHCOS) focuses on several key areas. Red Hat Enterprise Linux and RHCOS are designed for different use cases, and it’s important not to mix them. RHCOS will stay focused on one major thing… OpenShift/Kubernetes. RHCOS will be on the same cadence as Red Hat OpenShift Container Platform (RHOCP). This means that when Kubernetes and OpenShift release, RHCOS will release. This allows engineering to keep the same speed and make the changes necessary for each new release of RHOCP. It also means that when we sunset a specific version of RHOCP, we will also sunset the locked-in version of RHCOS. Now we can keep the releases coupled and supportable without having to drag RHEL along before it’s ready.

There are so many more changes, but I don’t just want to copy slides. Hopefully you are intrigued enough to watch the video. Thanks for reading!

OpenShift/Kubernetes Logging Overview

During a meeting it was brought up to me that the OpenShift/Kubernetes logging strategy isn’t very concise. While looking into this, I wanted to put some context around the technology: “How does OpenShift capture logs?” “What is captured and logged?” “What are my recommendations for using the logging system?”

EFK Stack

EFK stands for Elasticsearch (E), Fluentd (F), and Kibana (K). This is a variation on the traditional ELK stack that has become popular in recent years for log aggregation, collection, and sorting. Kibana acts as the user interface for the collected logs. Elasticsearch is the search and analytics engine. Fluentd is a unified logging system with hundreds of plugins (500+ as of the time of this writing [1]).
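As a sketch of how the pieces fit together, here is how you might query the Elasticsearch side directly with the elasticsearch Python client; the host, index pattern, and field names are assumptions that vary by deployment (Kibana builds this same kind of query behind its UI).

```python
# A sketch of querying the "E" in EFK with the elasticsearch Python
# client. The host, index pattern, and field names are assumptions
# that vary by deployment.
from elasticsearch import Elasticsearch

es = Elasticsearch(['https://logging-es.example.com:9200'])

# Fetch the ten most recent records Fluentd shipped for one project.
result = es.search(
    index='logstash-*',
    body={
        'query': {'match': {'kubernetes.namespace_name': 'myproject'}},
        'size': 10,
        'sort': [{'@timestamp': {'order': 'desc'}}],
    },
)
for hit in result['hits']['hits']:
    print(hit['_source'].get('message'))
```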

What is captured

Looking through the Kubernetes documentation, it becomes clearer what is captured where and how application logs are managed at the container level. In the section titled ‘Logging at the node level’ [2], it is explained that “Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in json format.” The OpenShift documentation says: “Fluentd reads from /var/log/messages and /var/log/containers/*.log for system logs and container logs, respectively. You can instead use the systemd journal as the log source. There are three deployer configuration parameters available in the deployer ConfigMap.” [3]. For additional information and resources on Fluentd, I strongly recommend watching the ‘OpenShift Commons’ video from May 17, 2017 [4].
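To see what that json-format file actually looks like on the node, here is a small sketch that reads one container log; the file path is illustrative.

```python
# A sketch of reading what the json-file logging driver leaves on the
# node: one JSON object per line with the raw log line, the stream it
# came from, and a timestamp. The path below is illustrative.
import json

LOG_PATH = '/var/log/containers/myapp-pod_myproject_myapp-abc123.log'

with open(LOG_PATH) as f:
    for line in f:
        record = json.loads(line)
        print(record['time'], record['stream'], record['log'].rstrip())
```

This is the same file Fluentd tails, parses, and enriches with Kubernetes metadata before shipping it to Elasticsearch.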

Cluster-wide -vs- Project logging

This is not a simple question to answer. I’m drawing upon my experience working with other clustered technologies and with customers that have implemented OpenShift. My recommendation is to do what’s right for your environment. I know… not very useful. Hopefully my heuristics will lead you to your answer.

Business Reasons

More often than not, your company, organization, group, and team have their own structure. I’ve worked with companies and agencies that have had every sort of organically grown business structure: some extremely independent, some centralized, and some ignorant of the structure entirely. We have to consider how you do business today, what will actually work, and how we can fit into that system. What are the security requirements? What are the data retention rates? What is the disaster recovery strategy?

Technical Reasons

If I were to propose that every project have its own EFK stack to manage only its logs, a customer running 100+ projects would have a LOT of redundancy, and the overhead for a security team to manage and track logs could be prohibitively expensive and complicated. How does a security team monitor the creation of new projects, validate their access, and ultimately ensure the security and compliance of the systems?

If I proposed one giant company-wide EFK stack, it would lighten the burden for some but could cause data management and growth complications. The security team is happy, because they have one logon to one server to see all the system and application logs being generated by the containers and applications. Let me assume for a minute a less common use case for OpenShift: batch processing. I want to use this platform to run an ETL function on a file I have stored out in S3. That project or job, which lives and dies on my whim, might introduce long-stale data into my logging system and tool chain. The point of a job is to run and be gone, so I might not care about the details.

While working as a US Army consultant in 2011, we were implementing Splunk. Working through the data ingest rates and figuring out what was good and stale data was complicated, and we had fairly static workloads. Working through all the requirements will likely guide you in the right direction. I suggest pruning to what is important and measuring it often: a high signal-to-noise ratio. This typically means smaller units or project-based logging. It becomes quite daunting to measure every job, application, and container in your environment on an ongoing basis. Offload that responsibility to the application and project owners.

Since I mentioned Splunk, I think it is important to include the following section as well: ‘Configuring Fluentd to Send Logs to an External Log Aggregator’. You can configure Fluentd to send a copy of its logs to an external log aggregator, rather than the default Elasticsearch, using the secure-forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them [3].

REFERENCES

Fluentd plug-ins list
[1] https://www.fluentd.org/plugins/all

Logging at the node level
[2] https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level

Aggregate Logging – OpenShift Docs
[3] https://docs.openshift.com/container-platform/3.3/install_config/aggregate_logging.html

Fluentd – OpenShift Commons Briefing
[4] https://blog.openshift.com/openshift-commons-briefing-72-cloud-native-logging-fluentd/

Intro to CloudForms Tags

This blog post was originally posted by me on Blogger (8/26/2016).

CloudForms Intro

Red Hat CloudForms offers unified management for hybrid environments, providing a consistent experience and functionality across container-based infrastructures, virtualization, and private and public cloud platforms. Red Hat CloudForms enables enterprises to accelerate service delivery by providing self-service, including complete operational and lifecycle management of the deployed services. It provides greater operational visibility through continuous discovery, monitoring, and deep inspection of managed resources. And it ensures compliance and governance via automated policy enforcement and remediation. All the while, CloudForms reduces operational costs by reducing or eliminating the manual processes that burden IT staff.

For more information visit http://redhat.com/cloudforms

Tags

I think tags are one of the most important features of Red Hat CloudForms. CloudForms’ ability to tag resources for later use in reporting, chargeback/showback, and automation is critical for gaining more in-depth knowledge and generating laser-focused reports that provide value.

In this article I am going to touch on general guidelines I use when building a tag schema. I believe there are two rules when talking about CloudForms tags: it’s better to over-tag your resources than to under-tag, and if you can measure it, you can manage and monitor it. Just like any data structure, a well-thought-out schema will save you a lot of work.
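To show how tags become actionable outside the UI, here is a hedged sketch of assigning a tag through the CloudForms (ManageIQ) REST API with Python; the hostname, credentials, VM id, and category/tag names are all placeholders.

```python
# A sketch of assigning a tag via the CloudForms (ManageIQ) REST API.
# The hostname, credentials, VM id, and category/tag names are all
# placeholders for illustration.
import requests

API = 'https://cloudforms.example.com/api'
AUTH = ('admin', 'password')  # use a real service account in practice

# Assign the "environment/production" tag to VM 42.
resp = requests.post(
    '{}/vms/42/tags'.format(API),
    auth=AUTH,
    json={
        'action': 'assign',
        'resources': [{'category': 'environment', 'name': 'production'}],
    },
    verify=False,  # lab setups often use self-signed certificates
)
resp.raise_for_status()
print(resp.json())
```

Anything you can tag this way you can also report on, charge back, and automate against.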

Tag Schema Recommendations


Business Tags:

The most important thing about your business tag schema is that the tags make sense to you and your company. The examples I list below are a very rough approximation of what your business might look like and how it might operate. Think about logically grouped business resources and come up with a tag for them.

Business Unit
– Sales (North America)
– Engineering
– QA
– Marketing
– IT Development
– IT Operations
Business Project
– oVirt
– OpenStack
– Project Phoenix
Business Owner
– VP – Linus Torvalds
– Project Owner – Richard Stallman
– Manager of Marketing (East) – Doris Hopper 

IT Operations Tags

If you’re reading this blog post, these tags probably matter most to you. Remember, measure what matters most to you. Help the business understand your value and what you do. I know it, you hopefully know it, let them know it too.

Infrastructure
– VMware
  – Production Systems
  – Development Lab
– Solid State Drive (SSD)
– Dell Hardware

Software
– Database
  – PostgreSQL 5
  – Oracle DB 11G
– Web Server
– CRM

SLA
– Diamond
– Gold
– Silver
– Bronze

Site (Geographic)
– New York Datacenter
– Hong Kong Datacenter
– Zone A – Virginia DC
– Zone B – Virginia DC

 

Change Tags

Imagine if IT and the business came to an agreement on service windows based on what worked for each business unit. This can happen. Maybe you want to have test deployments on production resources. Tagging change windows onto your resources will help with reporting and also with automation.

Patch Window

– Patch Window A (Second Tuesday of the month)
– Patch Window B (Last Sunday of the month)
– Canary Deployment Environment

SLA

A service level agreement (SLA) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected from the service provider.

– Diamond
– Gold
– Silver
– Bronze

Security Tags

Security tagging is something I’m still working through. I know there is value in creating a security role, group, and users for dashboards and reporting. I’m looking at linking this into policies and exporting log events to a SIM (Security Information Management) system.

– Information Assurance Security Team Check
  – IA Validated
  – IA Not Checked
– Service Catalog Provisioned (Provisioned Machines Certified Gold Master)
– Satellite Verified

Call to Action
I would love to find out what you are doing with your tagging schema. Contact me so we can discuss.