
Supply Chain Attacks

The News

Supply chain attacks are extremely common but not always quickly detected. This morning I saw an article about the AUR (Arch User Repository), a community-driven repository created and managed by Arch Linux users, hosting malicious software. The software in question was an orphaned PDF viewer called “acroread”. This software would mess with systemd, collect system information, and exfiltrate that data [1]. For more information, please read The Hacker News article referenced below.


The Supply Chain Problem

A supply chain is a system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer; in practice it is a complex and dynamic supply and demand network [5]. A supply chain attack is when you exploit part of that process to change the end result.

I am guessing it’s because of the Gentoo [2] and Docker Hub [3] attacks that everyone is looking at their community packages, and we will see more of these reports coming out. With cryptojacking there is a monetary incentive to hack a supply chain that is treated as “trusted” but is, in reality, community-run. This is the dark side of the open source movement.

Supply chain management and transparency are hard. Double that if you are a community-supported entity that doesn’t make money or give guarantees. Do you spend time fixing bugs, adding features, reviewing pull requests, documenting old features better, documenting new features, generating demand and community, or any of so many other actions? When there is not a level of control but an implied trust… you are asking for problems. Ask yourself, would you run a binary from a random guy on the internet? If so, look inward.

I want to point out that this was discovered in the AUR (Arch User Repository) [4], a community-driven repository created and managed by Arch Linux users, not in the official repositories built through the Arch Build System (ABS).

We think about supply chain management a lot at Red Hat. I would talk about this often when I was a Solution Architect. Red Hat, Fedora, SUSE, and Ubuntu each deliver repositories that are controlled and backed by companies. This means they hold the keys to the kingdom, and you need to follow a specific process to get a package added. The AUR appears to have something similar, but since it’s community based, attackers found a weakness in an orphaned package and used that as a jumping-off point.

Don’t run some code a guy on the internet wrote… Get it from a trusted source. I’m looking at you, Snaps and Juju… Unfortunately, scammers and state actors understand you want things ‘easy’. There is a tension between ease of use and security, between moving fast and moving safely. Over and over again we see systems get abused. Be mindful of your sources, and hopefully Arch and Gentoo don’t take too much of a reputation hit from this, but hopefully they are more vigilant in the future.
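One concrete habit that helps: verify what you download against a checksum published over a separate, trusted channel before you run it. Here is a minimal Python sketch of that check; the file name and hash below are made-up placeholders, not values from the AUR incident.

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Stream the file through SHA-256 so large downloads don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, expected_hex: str) -> bool:
    """Compare against the checksum the upstream project publishes (ideally signed)."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

# Hypothetical usage -- replace with the real artifact and its published hash.
if verify("some-package.tar.gz", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"):
    print("checksum matches")
else:
    print("checksum mismatch -- do not install")
```

A checksum only proves the file matches what the publisher listed; if the publisher’s own repository is compromised, as in the AUR case, you still need signature checking and source review.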


Sources

[1] https://thehackernews.com/2018/07/arch-linux-aur-malware.html?m=1

[2] https://thehackernews.com/2018/06/gentoo-linux-github.html

[3] https://www.bleepingcomputer.com/news/security/17-backdoored-docker-images-removed-from-docker-hub/

[4] https://wiki.archlinux.org/index.php/Arch_User_Repository

[5] https://en.wikipedia.org/wiki/Supply_chain

OpenStack Use Cases

Prior to moving into the role of Technical Marketing Manager for Red Hat’s Partner Solutions Initiative, I was responsible for selling the Red Hat Cloud Portfolio for Mid-Market North America. I was the sole Architect covering the entire country, and the first in the company with this title covering this market segment. People thought I was crazy, and it was crazy that we would put focus into this market, but I overachieved what was set in front of me.

Public cloud is just so easy and cheap, how could anyone selling “cloud” survive in this environment? Public cloud is as simple as swiping a credit card and you have a VM, but there are some use cases that public cloud providers can’t solve. Below I tried to remove the industry-focused terminology, because OpenStack is more than Telco and HPC. This is how I would pitch the advantage of Private Cloud to a CxO.

According to the latest figures from IDC, outlined by Network World [1], server shipments increased 20.7% YOY in quarter 1 of 2018. Revenue rose 38.6%. Let’s see if these trends continue. That’s an amazing growth rate for such a large industry.

The last thing I want to touch on, on a personal note: I don’t think the Kubernetes-on-OpenStack focus the OpenStack Foundation has held is as helpful or profitable for them as a whole. Whenever I see them demoed together, Kubernetes is doing all the interesting bits and OpenStack is just the pipes carrying the water. I can perform the same Kubernetes demo swapping out OpenStack for public cloud, or virtualization for that matter. OpenStack could be a larger part of the demo and discussion through the aaS (as a Service) projects it offers, but Kubernetes steals the limelight. My $0.02.


USE CASES:

Self-Managed CapEx Cloud

I have frequently run into customers that have business process problems with OpEx expenditures. Employee salaries come out of the OpEx budget, so if they go with AWS, they may literally be putting people out of a job or losing additional head count. I’ve encountered a customer that based their customer payment rates on OpEx expenditures. This meant that they could spend $10,000 in CapEx and not bat an eye, but incurring $100 more of OpEx a month would ultimately make the bills for their customers larger.

A typical customer buys cloud services by the drink; OpenStack, along with a managed hosting partner or OEM, allows that same customer to buy it by the pitcher. Make the cost a predictable CapEx expense. You choose how utilized you want your environment to be and have a defined cost for adding new resources to the environment. Help the CIO and CFO sleep better at night knowing what the cost will be, not tossing and turning because their AWS bill looks like a phone number.

My opinion is based on the sheer number of companies doing managed OpenStack and managed Red Hat OpenStack. Rackspace, IBM Bluemix, and Cisco Metacloud are hosting Red Hat OpenStack as a service. Dell, HPE, Super Micro, Cisco, NEC, and Lenovo all have reference architectures for Red Hat OpenStack, and many of them will do the racking, installing, and configuring of Red Hat OpenStack before wheeling it into your data center.

API Integrated Infrastructure

Customers are using external applications to provision both infrastructure and applications, so APIs become the integration points. Common examples of this are using an IT service manager, a CI pipeline, or an orchestration tool as part of the application pipeline. A typical consumer of cloud services wants to provision compute, networking, and storage all using an API, allowing them to use a familiar development language and tools. You can choose Ansible, Heat templates, Puppet modules, or a number of other tools, but you must choose something to scale your Development and Operations teams. OpenStack offers an API-first approach to all the components, which virtualization platforms like RHV or VMware don’t. RHV offers only Python bindings that don’t seem complete. For example, I don’t know if you could provision a new LUN on your NetApp connected to RHV via the API.
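To make the API-first point concrete, here is a minimal sketch using the openstacksdk Python library; the cloud name, image, and flavor are placeholders you would swap for your own.

```python
# pip install openstacksdk -- assumes a "mycloud" entry exists in clouds.yaml
import openstack

conn = openstack.connect(cloud="mycloud")

# Network, compute, and storage all hang off the same connection object.
network = conn.network.create_network(name="demo-net")
conn.network.create_subnet(
    network_id=network.id, name="demo-subnet", ip_version=4, cidr="192.168.10.0/24"
)

image = conn.image.find_image("rhel-server")   # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is up")
```

The same calls drop straight into an Ansible playbook or a CI job, which is the whole pitch: the infrastructure is just another API your pipeline talks to.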

Converged Datacenter

This comes back to everything being API driven. The customer can use the OpenStack GUI, CLI, or RESTful API to provision resources. We also allow for hyperconvergence of compute and storage on the same box. Cinder and Swift have connectors into so many different storage systems that it becomes really convenient to manage resources in multiple ways and on multiple boxes, but through a common tool. You don’t need to rely on expensive SANs and proprietary storage systems.
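As a sketch of that common tool in action, the same openstacksdk connection from the previous example can carve out and attach block storage regardless of which Cinder backend sits underneath; exact attachment parameters can vary by SDK release.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical clouds.yaml entry

# Create a 10 GB volume; Cinder routes this to whatever backend is configured,
# from Ceph on hyperconverged nodes to a vendor array behind a driver.
volume = conn.block_storage.create_volume(size=10, name="app-data")
conn.block_storage.wait_for_status(volume, status="available")

# Attach it to the VM from the earlier example.
server = conn.compute.find_server("demo-vm")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```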

IaaS provider / Managed Service Provider (MSP)

If we have a customer managing infrastructure for their customers, OpenStack is a good model for this because of the integration capabilities. This was a popular use case several years ago, and many startups and smaller integrators thought they could compete with the likes of Amazon or Azure. *Cough* Rackspace *Cough* Mirantis *Cough*. I think this is possible with OpenStack, but the investment required to start that business is cost prohibitive. Who wants to spend billions of dollars a year to find out? Didn’t think so.

Thinking on a smaller scale with more reasonable expectations, I think this use case has a lot of time left in it. I was working with a company sending satellites into space. In 2018 they intended to send 50 missions into space, each getting its own multi-purpose cloud computing platform to support any application-driven workload the future throws at them. I was working with a company that is using OpenStack to deliver their own Software as a Service to internal and external customers. I have many more use cases that are actually running production workloads. This is what OpenStack was originally designed to do.


NFV as a Service

This is our sweet spot within Red Hat OpenStack. We have many telcos coming to us to do advanced NFV functions at bare metal speeds. However, I don’t think this is just a carrier use case. If you are running a hybrid cloud, having a platform that supports your specific NFV implementation in the public and private clouds opens the decision making process and fits well into a cloud exit plan.

Asynchronous Scaling

Since everything is API driven, and with the release of OSP 10, we now have composable roles. We can scale any service (like networking) without having to scale other resources (like Identity). RHV-M is a “fat” application and can’t be broken out to scale independent parts.

I hope this helps give you some ideas and shows that OpenStack isn’t “dead”. I’m interested to hear what you think.

Resources

[1] https://www.networkworld.com/article/3278269/data-center/suddenly-the-server-market-is-hot-again.html

OpenStack Summit – 2018 – Vancouver

Summary

Attendance this year in Vancouver was way down from Boston last year, at about 3,500. The energy level of the keynotes, sessions, and hallway track was muted. The expo hall was also much smaller than I have ever seen in my past 3 years attending the NA Summits.

The general focus of the OpenStack Foundation is on inclusion and moving up the stack. They are watching what developers and cloud-native customers are talking about and showing how it runs on OpenStack. It feels genuine that they are trying to help, but with the attention so much on Kubernetes, OpenStack capabilities kept getting upstaged.

I’ve been in the OpenStack orbit for about 6 years now, starting with Essex. I started as a consultant building OpenStack clouds for development. I’ve helped organizations move smaller private clouds to production. Most recently I was helping customers adopt OpenStack. I want to see OpenStack succeed. I would love to see more (or any) hybrid cloud solutions and integration points with other environments. How great would it be for Horizon to connect into your AWS or Azure console? What if Horizon integrated with the Kubernetes dashboard? Instead, the OpenStack Foundation says, let’s talk about multiple container solutions…

I don’t necessarily see the general shrinking of attendance as a sign of the end. I think there are many factors that caused this nosedive:

  1. Convention fatigue
    • May 2 – 4. Kubecon. Copenhagen, Denmark. Mass popularity in public and private cloud. International travel.
    • May 8 – 10. Red Hat Summit. San Francisco, CA. Broader Open Source and Hybrid Cloud Focus.
    • May 7 – 9. Microsoft Build. Seattle, WA. Broad industry topics and leading cloud provider.
    • May 9 – 17. Pycon. Cleveland, OH. Python being the primary language OpenStack is written in, and the largest Python event in North America.
    • May 8 – 10. Google I/O. Mountain View, CA.
    • May 21 – 24. OpenStack Summit. Vancouver, BC. The potential attendees and sponsors of this event probably wanted to attend several of the above events in the month of May alone. The overlap is huge. Which would you rather attend, staff a booth at, or present at?
  2. Twice a year. There comes a time when your project isn’t the most innovative and fast-changing beast in the enterprise, and this is a good thing. Does OpenStack still need to release two versions a year? Does it need to have two conventions in a year? I’m not one to say, but I’d love to hear what you think…
  3. International travel. I live in the United States and traveling to Canada is not much of a problem, but it can be for some. If your company is not worldwide or doesn’t have a presence in Canada, I have found there is a LOT of hesitation about sending employees to any other country. Each company is going to put a different weight on this, but it is part of the equation. Also, Canada requires US citizens to have a passport, and that’s surprisingly uncommon for people in IT to have. You can’t just go to Canada on a whim, and the passport process can take 6 – 8 weeks or more.
  4. Vancouver, again. This is another minor point but something they should consider. Should we go somewhere new or go to a classic spot? It seems like in NA they are hitting the old haunts. Maybe you should go to places where it’s not “just another convention”, like Chicago or Detroit. A centrally located city lets people coming from all over fly for 2 – 3 hours instead of 5 – 8 hours. I go to San Francisco and Las Vegas too many times a year for conventions. Let’s try somewhere different.

I want to see OpenStack continue to succeed. They’ve made some missteps in the past trying to do new things. Kubernetes is cashing the immutable application check OpenStack wrote 6 years ago. The product is maturing and the hype is drying up. However, I hope I’ve made clear that I think it’s not at all dead. It’s in an important phase of its life. Middle aged. Accept it. A fancy new convertible and black hair dye won’t change the fact that you’ve grown up. That’s a good thing.


Videos to watch

Business Focus

Is the public cloud really eating OpenStack’s lunch?

OpenStack for AWS Architects


Technology Focus

OpenStack upgrade strategy the fast forward upgrade


Root your OpenStack on a solid foundation of leaf spine architecture

Integrating keystone with large scale centralized authentication

Leveraging serverless functions in heat to deliver anything as a service-xaas

The illusion of infinite capacity part 2 – over subscription

Free software needs free tools

Workload optimized OpenStack made easy

Engineering container security

Intro to container security

Container Linux and RHEL: The road ahead

This is from the Red Hat Summit 2018 series.

I’m not the person to give you the best updated roadmap for RHEL, and especially not for Container Linux. I’ve been focused on the multi-cloud management and automation world for the past few years. A lot of developments have been afoot. Here are some amazing highlights with my commentary. I strongly suggest that you check out the video below.

Immutable Foundation for OpenShift

The new Red Hat CoreOS (RHCOS) focuses on several key areas. Red Hat Enterprise Linux and RHCOS are designed for different use cases, and it’s important not to mix them. RHCOS will stay focused on one major thing… OpenShift/Kubernetes. RHCOS will be on the same cadence as Red Hat OpenShift Container Platform (RHOCP). This means when Kubernetes and OpenShift release, RHCOS will release. This allows engineering to keep the same speed and make the changes necessary for each new release of RHOCP. This also means that when we sunset a specific version of RHOCP, it will also sunset the locked-in version of RHCOS. Now we can keep the releases coupled and supportable without having to drag RHEL along before it’s ready.

There are so many more changes, but I don’t just want to copy slides. Hopefully you are intrigued enough to watch the video. Thanks for reading!

OpenShift/Kubernetes Logging Overview

OpenShift/Kubernetes Logging Overview

During a meeting it was brought up to me that the OpenShift/Kubernetes logging strategy isn’t very concise. While looking into this I wanted to put some context around the technology. “How does OpenShift capture logs?” “What is captured and logged?” “What are my recommendations for using the logging system?”

EFK Stack

EFK stands for Elasticsearch (E), Fluentd (F), and Kibana (K). This is a modification of the traditional ELK stack that has become popular in recent years for log aggregation, collection, and sorting. Kibana acts as the user interface for the collected logs. Elasticsearch is the search and analytics engine. Fluentd is a unified logging system with hundreds (500+ as of the time of this writing [1]) of plugins.
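To give a feel for how the pieces fit, here is a hedged sketch of querying Elasticsearch directly with its Python client for one container’s recent logs, which mirrors what Kibana does behind its search bar. The endpoint, index pattern, and field names are assumptions that vary by deployment.

```python
# pip install elasticsearch -- endpoint, index pattern, and field names
# below are assumptions; adjust to your deployment.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://logging-es.example.com:9200")  # hypothetical endpoint

# Fetch the ten most recent records Fluentd shipped for one container.
resp = es.search(
    index="logstash-*",  # common default index pattern for EFK
    query={"match": {"kubernetes.container_name": "my-app"}},
    sort=[{"@timestamp": {"order": "desc"}}],
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message", hit["_source"]))
```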

What is captured

Looking through the Kubernetes documentation, it becomes clearer what is captured where and how application logs are managed from the container level. In the section titled ‘Logging at the node level’ [2] it is explained that “Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in json format.” The OpenShift documentation says, “Fluentd reads from /var/log/messages and /var/log/containers/*.log for system logs and container logs, respectively. You can instead use the systemd journal as the log source. There are three deployer configuration parameters available in the deployer ConfigMap.” [3]. For additional information and resources on Fluentd, I strongly recommend watching the ‘OpenShift Commons’ videos from May 17, 2017 [4].
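As a small illustration of what Fluentd is actually reading, here is a sketch that parses those node-level files by hand, assuming the Docker json-file driver described above (other container engines use a different on-disk format):

```python
import glob
import json

# With the Docker json-file driver, each line is one JSON record:
# {"log": "<message>", "stream": "stdout"|"stderr", "time": "<RFC3339>"}
for path in glob.glob("/var/log/containers/*.log"):
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            print(record["time"], record["stream"], record["log"].rstrip())
```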

Cluster wide -vs- Project logging

This is not a simple question to answer. I’m drawing upon my experience working with other clustered technologies and customers that have implemented OpenShift. My recommendation is to do what’s right for your environment. I know… not very useful. Hopefully my heuristics will lead you to your answer.

Business Reasons

More often than not, your company, organization, group, and team each have their own structure. I’ve worked with companies and agencies that have had every sort of organically grown business structure. Some extremely independent, some centralized, and some ignorant of the structure entirely. We have to consider how you do business today, what will actually work, and how we can fit into that system. What are the security requirements? What are the data retention requirements? What is the disaster recovery strategy?

Technical Reasons

If I were to propose that every project have its own EFK stack to manage only its logs, a customer running 100+ projects would have a LOT of redundancy, and the overhead for a security team to manage and track logs could be prohibitively expensive and complicated. How does a security team monitor the creation of new projects, validate their access, and ultimately ensure the security and compliance of the systems?
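One way to picture that monitoring burden: a security team could watch the API for new projects as they appear. A minimal sketch with the Kubernetes Python client (on OpenShift, projects surface as namespaces):

```python
# pip install kubernetes -- watches for namespace creation events.
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespace):
    if event["type"] == "ADDED":
        name = event["object"].metadata.name
        print(f"new project/namespace: {name} -- verify logging and access policy")
```

Multiply that by per-project EFK stacks that each need the same validation, and the overhead argument makes itself.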

If I proposed one giant company-wide EFK stack, it would lighten the burden for some but could cause data management and growth complications. Our security team is happy, because they have one logon to one server to see all the system and application logs being generated by the containers and applications. But let me assume for a minute an uncommon use case for OpenShift: batch processing. I want to use this platform to run an ETL function on a file I have stored out in S3. Projects or jobs that live and die on my whim might introduce long-stale data into my logging system and tool chain. The point of a job is to run and be gone, so I might not care about the details.

While working as a US Army consultant in 2011, we were implementing Splunk. Working through the data ingest rates and figuring out what was good and stale data was complicated, and we had fairly static workloads. Working through all the requirements will likely guide you in the right direction. I suggest pruning to what is important and measuring it often: a high signal-to-noise ratio. This typically means smaller units, or project-based logging. It becomes quite daunting to measure every job, application, and container in your environment on an ongoing basis. Offload that responsibility to the application and project owners.

Since I mentioned Splunk, I thought it was important to include the following section as well: ‘Configuring Fluentd to Send Logs to an External Log Aggregator’. You can configure Fluentd to send a copy of its logs to an external log aggregator, and not the default Elasticsearch, using the secure-forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them [3].

REFERENCES

Fluentd plugins list
[1] https://www.fluentd.org/plugins/all

Logging at the node level
[2] https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level

Aggregate Logging – OpenShift Docs
[3] https://docs.openshift.com/container-platform/3.3/install_config/aggregate_logging.html

Fluentd – OpenShift Commons Briefing
[4] https://blog.openshift.com/openshift-commons-briefing-72-cloud-native-logging-fluentd/

Intro to CloudForms Tags

This blog post was originally posted by me on Blogger (8/26/2016).

CloudForms Intro

Red Hat CloudForms offers unified management for hybrid environments, providing a consistent experience and functionality across container-based infrastructures, virtualization, and private and public cloud platforms. Red Hat CloudForms enables enterprises to accelerate service delivery by providing self-service, including complete operational and lifecycle management of the deployed services. It provides greater operational visibility through continuous discovery, monitoring, and deep inspection of managed resources. And it ensures compliance and governance via automated policy enforcement and remediation. All the while, CloudForms is reducing operational costs, reducing or eliminating the manual processes that burden IT staff.

For more information visit http://redhat.com/cloudforms

Tags

I think tags are one of the most important features of Red Hat CloudForms. CloudForms’ ability to tag resources for later use in reporting, chargeback/showback, and automation is critical for getting more in-depth knowledge and generating laser-focused reports that provide value.

In this article I am going to touch on the general guidelines I use when building a tag schema. I believe there are two rules when talking about CloudForms tags. It’s better to over-tag your resources than to under-tag. If you can measure it, you can manage and monitor it. Just like any data structure, a well-thought-out schema will save you a lot of work.
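Tags are not just labels in the UI; they are addressable through the CloudForms/ManageIQ REST API, which is what makes the reporting and automation use cases possible. A hedged sketch, with the appliance URL, credentials, VM id, and category/value all placeholders:

```python
# pip install requests -- host, credentials, and tag values are placeholders.
import requests

API = "https://cloudforms.example.com/api"  # hypothetical appliance URL
AUTH = ("admin", "secret")                  # use real credentials or tokens

# Assign a category/tag pair to a VM by id; mirrors what the UI does.
resp = requests.post(
    f"{API}/vms/42/tags",
    auth=AUTH,
    json={
        "action": "assign",
        "resources": [{"category": "environment", "name": "production"}],
    },
)
resp.raise_for_status()
print(resp.json())
```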

Tag Schema Recommendations


Business Tags:

The most important thing about your business tag schema is that it makes sense to you and your company. The examples I list below are a very rough approximation of what your business will look like and how it will operate. Think about logically grouped business resources and come up with a tag for them.

Business Unit
– Sales (North America)
– Engineering
– QA
– Marketing
– IT Development
– IT Operations
Business Project
– oVirt
– OpenStack
– Project Phoenix
Business Owner
– VP – Linus Torvalds
– Project Owner – Richard Stallman
– Manager of Marketing (East) – Doris Hopper 

IT Operations Tags

If you’re reading this blog post, these tags probably matter most to you. Remember, measure what matters most to you. Help the business understand your value and what you do. I know it, you hopefully know it, let them know it too.

Infrastructure

– VMWare

  – Production Systems

  – Development Lab

– Solid State Drive (SSD)

– Dell Hardware

Software

– Database

  – PostgreSQL 5

  – Oracle DB 11G

– Web Server

– CRM

Site (Geographic)
– New York Datacenter
– Hong Kong Datacenter
– Zone A – Virginia DC
– Zone B – Virginia DC


Change Tags

Imagine if IT and the business came to an agreement on service windows based on what worked for each business unit. This can happen. Maybe you want to have test deployments on production resources. Tagging change windows onto your resources will help with reporting and also automation, as sketched after the list below.

Patch Window

– Patch Window A (Second Tuesday of the month)
– Patch Window B (Last Sunday of the month)
– Canary Deployment Environment
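For instance, an automation job could pull every VM tagged into a given patch window when that window opens. A minimal sketch against the CloudForms/ManageIQ REST API; the host, credentials, and the patch_window category are hypothetical and would need to exist in your tag schema:

```python
# pip install requests -- host, credentials, and tag category are placeholders.
import requests

API = "https://cloudforms.example.com/api"
AUTH = ("admin", "secret")

# Collect everything tagged into Patch Window A before kicking off patching.
resp = requests.get(
    f"{API}/vms",
    auth=AUTH,
    params={"by_tag": "/patch_window/window_a", "expand": "resources"},
)
resp.raise_for_status()
for vm in resp.json().get("resources", []):
    print("queued for patching:", vm["name"])
```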

SLA

A service level agreement (SLA) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected from the service provider.

– Diamond

– Gold

– Silver

– Bronze

Security Tags

Security tagging is something I’m still working through. I know there is value in creating a Security Role, Group and Users for Dashboard and Reporting. I’m looking at linking this into Policies and exporting log events to a SIM (Security Information Management).

– Information Assurance Security Team Check

  – IA Validated

  – IA Not Checked

– Service Catalog Provisioned (Provisioned Machines Certified Gold Master)

– Satellite Verified

Call to Action
I would love to find out what you are doing with your tagging schema. Contact me so we can discuss.