Supply chain attacks are extremely common but not always quickly detected. This morning I saw an article about the AUR (Arch User Repository), a community-driven repository created and managed by Arch Linux users, hosting malicious software. The software in question was an orphaned PDF viewer called “acroread”. The package would mess with systemd, collect system information, and exfiltrate that data. For more information, please read the Hacker News article referenced.
The Supply Chain Problem
A supply chain is really a complex, dynamic supply-and-demand network: a system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer. A supply chain attack is when you exploit part of that process to change the end result.
I am guessing it’s because of the Gentoo and Docker Hub attacks that everyone is looking at their community packages, and we will see more of these reports coming out. With cryptojacking there is a monetary incentive to compromise a supply chain that is treated as trusted but is, in reality, community-run. This is the dark side of the open code movement.
Supply chain management and transparency are hard. Double that if you are a community-supported entity that doesn’t make money or give guarantees. Do you spend your time fixing bugs, adding features, reviewing pull requests, documenting old features better, documenting new features, generating demand and community, or any of the many other tasks? When there is no real control, only implied trust… you are asking for problems. Ask yourself, would you run a binary from a random guy on the internet? If so, look inward.
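To make that question concrete, here is a minimal Python sketch (the file name and published checksum are hypothetical) of the bare-minimum check before running a downloaded artifact: compare it against a checksum the upstream project publishes over a channel you actually trust. It won’t catch a compromised upstream, but it at least rules out a tampered mirror or man-in-the-middle.

```python
import hashlib
from pathlib import Path

def sha256sum(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to avoid loading it all in memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical value: this would come from a checksum the upstream project
# publishes over a channel you trust (signed release notes, HTTPS site, etc.).
EXPECTED = "0123456789abcdef..."

if sha256sum("acroread.tar.gz") != EXPECTED:
    raise SystemExit("Checksum mismatch -- do not install this package.")
print("Checksum matches the published value.")
```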
I want to point out that this was discovered in the AUR (Arch User Repository), a community-driven repository created and managed by Arch Linux users, not in official repositories like the Arch Build System (ABS).
We think about supply chain management a lot at Red Hat, and I would talk about this often when I was a Solution Architect. Red Hat, Fedora, SUSE, and Ubuntu each deliver repositories that are controlled and backed by companies. This means they hold the keys to the kingdom, and you need to follow a specific process to get a package added. The AUR appears to have something similar, but since it’s community based, attackers found a weakness in an orphaned package and used it as a jumping-off point.
Don’t run code some guy on the internet wrote… Get it from a trusted source. I’m looking at you, Snaps and Juju… Unfortunately, scammers and state actors understand you want things ‘easy’. There is a tension between ease of use and security, between moving fast and moving safely. Over and over again we see these systems get abused. Be mindful of your sources. Hopefully Arch and Gentoo don’t take too much of a reputation hit from this, and hopefully they are more vigilant in the future.
Prior to moving into the role of Technical Marketing Manager for Red Hat’s Partner Solutions Initiative, I was responsible for selling the Red Hat Cloud Portfolio for Mid-Market North America. I was the sole Architect covering the entire country, and the first in the company with this title covering this market segment. People thought I was crazy, and it was crazy that we would put focus into this market, but I overachieved what was set in front of me.
Public cloud is just so easy and cheap, how could anyone selling “cloud” survive in this environment? Public cloud is as simple as swiping a credit card and you have a VM, but there are some use cases that public cloud providers can’t solve. Below I have tried to remove the industry-focused terminology, because OpenStack is more than Telco and HPC. This is how I would pitch the advantage of private cloud to a CxO.
According to the latest figures from IDC, outlined by Network World, server shipments increased 20.7% year over year in Q1 2018 and revenue rose 38.6%. That’s an amazing growth rate for such a large industry. Let’s see if these trends continue.
The last thing I want to touch on, on a personal note: I don’t think the Kubernetes-on-OpenStack focus the OpenStack Foundation has maintained is as helpful or profitable for them as a whole. Whenever I see the two demoed together, Kubernetes is doing all the interesting bits and OpenStack is just the pipes carrying the water. I can perform the same Kubernetes demo swapping out OpenStack for public cloud, or for virtualization for that matter. OpenStack could be a larger part of the demo and of the discussion of the aaS (as a Service) projects Kubernetes uses, but Kubernetes steals the limelight. My $0.02.
Self-Managed CapEx Cloud
I have frequently run into customers that have business process problems with OpEx expenditures. Employee salaries come out of the OpEx budget, so if they go with AWS they may literally be putting people out of a job or losing additional head count. I’ve encountered a customer that set their customer billing rates based on OpEx expenditures. This meant they could spend $10,000 in CapEx and not bat an eye, but incurring $100 more of OpEx a month would ultimately make their customers’ bills larger.
A typical customer buys cloud services by the drink; OpenStack, along with a managed hosting partner or OEM, allows that same customer to buy it by the pitcher. Make the cost a predictable CapEx expense. You choose how utilized you want your environment to be and have a defined cost for adding new resources. Help the CIO and CFO sleep better at night knowing what the cost will be, not tossing and turning because their AWS bill looks like a phone number.
My opinion is based on the sheer number of companies doing managed OpenStack and managed Red Hat OpenStack. Rackspace, IBM Bluemix, and Cisco Metacloud are hosting Red Hat OpenStack as a service. Dell, HPE, Super Micro, Cisco, NEC, and Lenovo all have reference architectures for Red Hat OpenStack, and many of them will do the racking, installing, and configuring of Red Hat OpenStack before wheeling it into your data center.
API Integrated Infrastructure
Customers are using external applications to provision both infrastructure and applications, so APIs become the integration points. Common examples are using an IT service manager, a CI pipeline, or an orchestration tool as part of the application pipeline. A typical consumer of cloud services wants to provision compute, networking, and storage all through an API, allowing them to use familiar development languages and tools. You can choose Ansible, Heat templates, Puppet modules, or a number of other tools, but you must choose something to scale your development and operations teams. OpenStack offers an API-first approach to all the components, which virtualization platforms like RHV or VMware don’t. RHV offers only Python bindings that don’t seem complete; for example, I don’t know if you could provision a new LUN on a NetApp connected to RHV via the API.
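As a hedged sketch of what “API-first” looks like in practice, here is how you might provision a server with the openstacksdk Python library. The cloud name, image, flavor, and network names are placeholders for whatever exists in your environment.

```python
import openstack

# Credentials come from clouds.yaml or OS_* environment variables;
# "mycloud" is a placeholder cloud name.
conn = openstack.connect(cloud="mycloud")

image = conn.compute.find_image("rhel-7.5")       # placeholder image name
flavor = conn.compute.find_flavor("m1.small")     # placeholder flavor name
network = conn.network.find_network("private")    # placeholder network name

# Boot a server through the same Nova API the GUI and CLI use.
server = conn.compute.create_server(
    name="api-built-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

The same call can just as easily be wrapped in an Ansible task, a Heat template, or a CI job; the point is that nothing in the workflow requires a human clicking a console.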
This comes back to everything being API driven. The customer can use the OpenStack GUI, CLI, or RESTful API to provision resources. We also allow for hyper-convergence of compute and storage on the same box. Cinder and Swift have connectors into so many different storage systems that it becomes really convenient to manage resources in multiple ways and on multiple boxes, but through a common tool. You don’t need to rely on expensive SANs and proprietary storage systems.
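Storage follows the same pattern as compute. A hedged sketch, again with openstacksdk and placeholder names, creating a Cinder volume and attaching it to an existing instance; the backend behind Cinder (Ceph, NetApp, whatever the operator wired in) doesn’t change the call.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

# Create a 10 GB Cinder volume and wait for it to become usable.
volume = conn.block_storage.create_volume(size=10, name="data-vol")
conn.block_storage.wait_for_status(volume, status="available")

# Attach it to an existing instance (server name is a placeholder).
server = conn.compute.find_server("api-built-server")
conn.compute.create_volume_attachment(server, volume_id=volume.id)
```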
IaaS provider / Managed Service Provider (MSP)
If we have a customer managing infrastructure for their customers, OpenStack is a good model for this because of its integration capabilities. This was a popular use case several years ago, and many start-ups and smaller integrators thought they could compete with the likes of Amazon or Azure. *Cough* Rackspace *Cough* Mirantis *Cough*. I think this is possible with OpenStack, but the investment required to start that business is cost prohibitive; who wants to spend billions of dollars a year to find out? Didn’t think so.
Thinking on a smaller scale with more reasonable expectations, I think this use case has a lot of time left in it. I was working with a company sending satellites into space. In 2018 they intended to send 50 missions into space, each getting its own multi-purpose cloud computing platform to support any application-driven workload the future throws at it. I was also working with a company that is using OpenStack to deliver their own Software as a Service to internal and external customers. I have many more use cases that are actually running production workloads. This is what OpenStack was originally designed to do.
NFV as a Service
This is our sweet spot within Red Hat OpenStack. We have many telcos coming to us to do advanced NFV functions at bare-metal speeds. However, I don’t think this is just a carrier use case. If you are running a hybrid cloud, having a platform that supports your specific NFV implementation in both the public and private clouds opens up the decision-making process and fits well into a cloud exit plan.
Since everything is API driven, and with the release of OSP 10 we now have composable roles, we can scale any service (like Networking) without having to scale other resources (like Identity). RHV-M is a “fat” application and can’t be broken apart to scale its individual parts.
I hope this helps give you some ideas and that OpenStack isn’t “dead”. I’m interested to hear what you think.
Attendance this year in Vancouver was way down, about 3,500, from Boston last year. The energy level of the keynotes, sessions, and hallway conversations was muted. The expo hall was also much smaller than I have ever seen in my past three years attending the NA Summits.
The general focus of the OpenStack Foundation is on inclusion and moving up the stack. They are watching what developers and cloud-native customers are talking about and showing how it runs on OpenStack. It feels genuine that they are trying to help, but with so much of the attention on Kubernetes, OpenStack’s capabilities kept getting upstaged.
I’ve been in the OpenStack orbit for about six years now, starting with Essex. I started as a consultant building OpenStack clouds for development. I’ve helped organizations move smaller private clouds to production, and most recently I was helping customers adopt OpenStack. I want to see OpenStack succeed. I would love to see more (or any) hybrid cloud solutions and integration points with other environments. How great would it be for Horizon to connect into your AWS or Azure console? What if Horizon integrated with the Kubernetes dashboard? Instead, the OpenStack Foundation says, let’s talk about multiple container solutions…
I don’t necessarily see the shrinking attendance as a sign of the end. I think many factors caused the nose dive:
- Convention fatigue
- May 2 – 4. Kubecon. Copenhagen, Denmark. Mass popularity in public and private cloud. International travel.
- May 8 – 10. Red Hat Summit. San Francisco, CA. Broader Open Source and Hybrid Cloud Focus.
- May 7 – 9. Microsoft Build. Seattle, WA. Broad industry topics and leading cloud provider.
- May 9 – 17. Pycon. Cleveland, OH. Python being the primary language OpenStack is written in, and the largest Python event in North America.
- May 8 – 10. Google I/O. Mountain View, CA.
- May 21 – 24. OpenStack Summit. Vancouver, BC. The potential attendees and sponsors of this event probably wanted to attend several of the above events in the month of May alone. The overlap is huge. Which would you rather attend, staff a booth at, or present at?
- Twice a year. There comes a time when your project isn’t the most innovative, fastest-changing beast in the enterprise, and that is a good thing. Does OpenStack still need to release two versions a year? Does it need to hold two conventions a year? I’m not one to say, but I’d love to hear what you think…
- International travel. I live in the United States and traveling to Canada is not much of a problem for me, but it can be for some. If your company is not worldwide or doesn’t have a presence in Canada, I have found there is a LOT of hesitation about sending employees to another country. Each company will put a different weight on this, but it is part of the equation. Also, Canada requires US citizens to have a passport, which is surprisingly uncommon for people in IT to have. You can’t just go to Canada on a whim, and the passport process can take 6 – 8 weeks or more.
- Vancouver, again. This is another minor point, but something they should consider. Should we go somewhere new or return to a classic spot? It seems like in NA they keep hitting the old haunts. Maybe go to places where it’s not “just another convention”, like Chicago or Detroit. A centrally located city lets people coming from all over fly for 2 – 3 hours instead of 5 – 8. I go to San Francisco and Las Vegas too many times a year for conventions. Let’s try somewhere different.
I want to see OpenStack continue to succeed. They’ve made some missteps in the past trying to do new things, and Kubernetes is cashing the immutable-application check OpenStack wrote six years ago. The product is maturing and the hype is drying up. However, I hope I’ve made clear that I don’t think it’s dead at all. It’s in an important phase of its life. Middle age. Accept it. A fancy new convertible and black hair dye won’t change the fact that you’ve grown up, and that’s a good thing.
Videos to watch
At Red Hat Summit 2018, there was a lot of focus on serverless of all sorts. One major new player (new, even though S2I has existed as long as OpenShift) is Functions as a Service on OpenShift. I’ve played with several options, but we have settled on Apache OpenWhisk.
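For anyone who hasn’t seen OpenWhisk code, here is a hedged sketch of what a Python action looks like (the greeting logic is made up): OpenWhisk invokes a `main` function with the invocation parameters as a dict and expects a JSON-serializable dict back.

```python
def main(params):
    """OpenWhisk Python action: receives invocation parameters as a dict
    and must return a JSON-serializable dict."""
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

You would deploy and call it with the wsk CLI, with something like `wsk action create greeting greeting.py` followed by `wsk action invoke greeting --param name OpenShift --result`.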
I can say most of my customers are on the public cloud, the majority of them on AWS. Every one of my customers on public cloud is talking to me about Lambda. The key takeaway is that everyone is talking about it without fully realizing the implications. Committing to Lambda means committing deeply to AWS: the more functions you create, the harder it is, even if the code is portable, to rebuild everything somewhere else. I’ve drawn an analogy that seems to resonate, with me at least: “writing Lambda functions is like writing vendor-specific hardware kernel integrations back in the 90s.”
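One mitigation that has worked for me is keeping the business logic in plain, portable Python and treating the Lambda handler as a thin adapter. A hedged sketch, where the event shape and function names are hypothetical:

```python
# Plain Python, no AWS imports -- this part runs anywhere (OpenWhisk, a container, a cron job).
def resize_request(image_key: str, width: int) -> dict:
    # ...real work would happen here...
    return {"image": image_key, "width": width, "status": "queued"}


# The only function that knows about Lambda's event/context shape.
def lambda_handler(event, context):
    # 'image_key' and 'width' are a made-up event payload for illustration.
    result = resize_request(event["image_key"], int(event["width"]))
    return {"statusCode": 200, "body": result}
```

The adapter is throwaway; the logic underneath is what you carry to the next platform.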
The decision to change platforms becomes a much harder and longer choice if you are not using open platforms and products. That has been the value of open source platforms for 25 years.
Serverless on OpenShift Interactive Learning Platform
Chris Wright at Red Hat Summit 2018: Emerging technology and innovation
See FaaS in action along with the power of your portfolio of middleware applications running on multiple clouds. Make your code and data portable; make it Red Hat.
Functions-as-a-Service with OpenWhisk and Red Hat OpenShift
This is from the Red Hat Summit 2018 series.
I’m not the person to give you the most up-to-date road map for RHEL, and especially not for Container Linux; I’ve been focused on the multi-cloud management and automation world for the past few years. But a lot of developments have been afoot. Here are some amazing highlights with my commentary. I strongly suggest you check out the video below.
Immutable Foundation for OpenShift
The new Red Hat CoreOS (RHCOS) focuses on several key areas. Red Hat Enterprise Linux and RHCOS are designed for different use cases and it’s important not to mix them. RHCOS will stay focused on one major thing… OpenShift/Kubernetes. RHCOS will be on the same cadence as Red Hat OpenShift Container Platform (RHOCP), meaning that when Kubernetes and OpenShift release, RHCOS will release. This allows engineering to keep the same speed and make the changes necessary for each new release of RHOCP. It also means that when a specific version of RHOCP is sunset, the locked-in version of RHCOS is sunset with it. Now the releases can stay coupled and supportable without having to drag RHEL along before it’s ready.
There are so many more changes, but I don’t just want to copy slides. Hopefully you are intrigued enough to watch the video. Thanks for reading!
I spent last week in San Francisco attending Red Hat Summit 2018, my third Red Hat Summit in a row. The experience is always a blur and a whirlwind of activity from beginning to end. I am going to try to put together a few posts this week about things I thought were significant to the direction of Red Hat and the industry.
I had an amazing time meeting with friends and colleagues. I spent quite a bit of time helping customers at the Public Cloud booth. The booth was supposed to help customers understand the portability of their subscriptions to the public cloud via the Cloud Access program, but most of the focus for customers was running OpenShift on the public cloud.
A recap of Red Hat Summit 2018.
“Red Hat® OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. Regardless of your applications architecture, OpenShift lets you easily and quickly build, develop, and deploy in nearly any infrastructure, public or private. ” 
I’ve helped a lot of customers find their way to OpenShift. I’ve helped them develop and refine use cases as well as figure out how it fits into their environment.
Other than the unfortunate name, I really like Cockroach Labs’ CockroachDB from the little I’ve used it. I am by far not an expert on CockroachDB; I first learned about it at OpenStack Summit 2017 in Boston. Kudos to the guys for putting that presentation together and presenting it on the big stage.
What is CockroachDB?
“CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. It scales horizontally; survives disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports strongly-consistent ACID transactions; and provides a familiar SQL API for structuring, manipulating, and querying data.” 
There is a lot more to be said about the statement above that I am not going to cover here. To summarize, you can partition and replicate your database while still making sure queries only get the latest data; this is called “strong consistency”.
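Because CockroachDB speaks the PostgreSQL wire protocol, the “familiar SQL API” part is very literal. A hedged sketch using psycopg2 against a local insecure dev cluster; the connection details and table are assumptions for illustration only.

```python
import psycopg2

# Assumes a local single-node CockroachDB started in insecure mode,
# listening on its default SQL port 26257.
conn = psycopg2.connect(host="localhost", port=26257, user="root",
                        dbname="defaultdb", sslmode="disable")
conn.autocommit = True

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance DECIMAL)")
    cur.execute("UPSERT INTO accounts (id, balance) VALUES (1, 500.00)")
    cur.execute("SELECT id, balance FROM accounts")
    print(cur.fetchall())
```

The only giveaway that this isn’t vanilla Postgres is the port number and the UPSERT statement; the client library and SQL are otherwise the same.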
On a personal level I like the notion of OpenShift and CockroachDB together for a couple of reasons. Chiefly, they are both open source. You can experiment locally with no up-front cost on a developer machine and eventually roll it into production, and when you are ready for production both projects have enterprise support offerings. You don’t have to invest time, money, and resources into figuring out if it’s going to work for you, only to find out 6-9 months down the line that it’s not what you hoped it would be. Experiment now! My next post will give you the framework to start….
2018 is upon us and I’ve renewed my interest in posting to this blog. I’ve been doing a ton of learning in 2017 and I think 2018 is going to be the year I start spreading that knowledge.
I’m always pushing myself to break new ground or enhance concepts we are already familiar with. Look for more of this in the coming weeks. I think I have things to say about data storage (databases) in the Kubernetes space.
During a meeting it was brought up to me that the OpenShift/Kubernetes logging strategy isn’t very clearly defined. While looking into this, I wanted to put some context around the technology: “How does OpenShift capture logs?” “What is captured and logged?” “What are my recommendations for using the logging system?”
EFK stands for Elasticsearch (E), Fluentd (F), and Kibana (K). This is a variation on the traditional ELK stack that has become popular in recent years for log aggregation, collection, and sorting. Kibana acts as the user interface for the collected logs, Elasticsearch is the search and analytics engine, and Fluentd is a unified logging layer with hundreds (500+ as of the time of this writing) of plugins.
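Kibana is the usual front end, but the Elasticsearch side is just an API you can hit yourself. A hedged sketch with the elasticsearch Python client (7.x-style calls); the endpoint, index pattern, and field names are assumptions, and the authentication OpenShift normally puts in front of Elasticsearch is omitted.

```python
from elasticsearch import Elasticsearch

# Assumed endpoint; in a real cluster this sits behind auth and TLS.
es = Elasticsearch(["https://logging-es.example.com:9200"])

result = es.search(
    index="logstash-*",   # assumed index pattern
    body={
        "query": {"match": {"kubernetes.namespace_name": "my-project"}},
        "size": 10,
        "sort": [{"@timestamp": {"order": "desc"}}],
    },
)

# Print the raw log message from each matching document.
for hit in result["hits"]["hits"]:
    print(hit["_source"].get("message"))
```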
What is captured
Looking through the Kubernetes documentation, it’s made a bit clearer what is captured where and how application logs are managed at the container level. In the section titled ‘Logging at the node level’ it is explained that “Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in json format.” The OpenShift documentation says “Fluentd reads from /var/log/messages and /var/log/containers/*.log for system logs and container logs, respectively. You can instead use the systemd journal as the log source. There are three deployer configuration parameters available in the deployer ConfigMap.” For additional information and resources on Fluentd, I strongly recommend watching the ‘OpenShift Commons’ videos from May 17, 2017.
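The practical consequence for application teams: write logs to stdout/stderr rather than to files inside the container, and the node-level machinery (the container engine’s logging driver plus Fluentd) picks them up with no extra plumbing. A minimal sketch of what that looks like in a Python application (the logger name and message are illustrative):

```python
import logging
import sys

# Log to stdout so the container engine's logging driver captures the stream
# and Fluentd can ship it to Elasticsearch; no in-container log files needed.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

log = logging.getLogger("my-app")
log.info("order processed: id=%s", "12345")
```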
Cluster wide -vs- Project logging
This is not a simple question to answer. I’m drawing upon my experience working with other clustered technologies and customers that have implemented OpenShift. My recommendation is to do what’s right for your environment. I know… not very useful. Hopefully my heuristics will lead you to your answer.
More often than not, your company, organization, group, and team each have their own structure. I’ve worked with companies and agencies that have had every sort of organically grown business structure: some extremely independent, some centralized, and some ignorant of structure entirely. We have to consider how you do business today, what will actually work, and how we can fit into that system. What are the security requirements? What are the data retention requirements? What is the disaster recovery strategy?
Suppose I were to propose that every project have its own EFK stack to manage only its own logs. A customer running 100+ projects will have a LOT of redundancy, and the overhead for a security team to manage and track logs could be prohibitively expensive and complicated. How does a security team monitor the creation of new projects, validate their access, and ultimately ensure the security and compliance of the systems?
If I proposed one giant company-wide EFK stack, it would lighten the burden for some but could cause data management and growth complications. The security team is happy, because they have one login to one server to see all the system and application logs being generated by the containers and applications. But let me assume for a minute a less common use case for OpenShift: batch processing. I want to use the platform to run an ETL function on a file I have stored out in S3. Projects or jobs that live and die on my whim might introduce long-stale data into my logging system and tool chain. The point of a job is to run and be gone, so I might not care about the details.
While working as a US Army consultant in 2011, we were implementing Splunk. Working through the data ingest rates and figuring out what was good and what was stale data was complicated, and we had fairly static workloads. Working through all the requirements will likely guide you in the right direction. I suggest pruning down to what is important and measuring it often: aim for a high signal-to-noise ratio. This typically means smaller units or project-based logging. It becomes quite daunting to measure every job, application, and container in your environment on an ongoing basis; offload that responsibility to the application and project owners.
Since I mentioned Splunk, I thought it important to include the following section as well: ‘Configuring Fluentd to Send Logs to an External Log Aggregator’. You can configure Fluentd to send a copy of its logs to an external log aggregator, rather than the default Elasticsearch, using the secure-forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them.
Fluentd plug-ins list
Logging at the node level
Aggregate Logging – OpenShift Docs
Fluentd – OpenShift Commons Briefing