
How Sleep Number Eroded My Trust Through Bad Security Practices

I don’t like posting personal things on this blog and don’t intend to in the future, but I am going to treat this incident I had with Sleep Number as a case study in how not to design software and business processes. It has energized me to do a series of posts on data integrity and other security concepts. Stay tuned.

TL;DR: I was a new customer, but after being scared off by the following business and technology practices I am receiving a full refund. I want to be totally clear about why I chose to ask for a refund. First, Sleep Number not only has the ability to change historical records in their system; I will demonstrate evidence that they did so, leaving me with the only conclusion I can draw: this is a common practice without constraints. Second, the sum total on a transaction (order) is not derived from an application or algorithm that validates the math, but is an arbitrary value editable at least by a human. Transactions (orders) and the list of assets are not dependent on each other and thus are not required to match. I saw this on their system while working with a manager, but didn’t violate their privacy by taking a picture of it, so you will have to trust this assertion. Finally, Sleep Number doesn’t competently train their staff on their systems, policies, and most of all data security. I can’t provide evidence of this beyond my personal experience, but I hope that evidence of the first two assertions will lend support to the third.

My conclusion and plea to Sleep Number customers is to keep a secured physical copy and a scanned digital copy of your original receipt. As mentioned later in my post, I repeatedly asked the following question, which was never really addressed or answered: “How can I trust that if I have a warranty problem or a billing dispute, someone from Sleep Number won’t edit (and thus falsify) the data to say whatever they want it to say, and deny me or overcharge me?”
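For the digital copy, one lightweight way to make later tampering detectable is to record a cryptographic fingerprint of the scan on the day you receive it. A minimal sketch (the file name and contents below are stand-ins for your actual scanned receipt):

```shell
# Stand-in for a scanned receipt; use your real scan (PDF, image, etc.)
echo "Order 95007971341  Total: 5877.41" > receipt.txt

# Record the fingerprint at purchase time and keep it somewhere safe
sha256sum receipt.txt > receipt.txt.sha256
cat receipt.txt.sha256

# Later, verify the copy has not been altered; prints "receipt.txt: OK" if intact
sha256sum -c receipt.txt.sha256
```

If even one byte of the file changes, the check fails, so the fingerprint (emailed to yourself or printed out) gives you independent evidence of what the receipt said on day one.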

Let’s get into it… 

November 23, 2019

I had a good experience in a Sleep Number store. My wife and I spent probably about 2 hours in the store with our salesperson. We chose to purchase the m7 360 queen bed, the queen 360 FlexFit 2, a mattress pad and a remote. After discounts and so on, the total came out to $5877.41.

December 3, 2019

Delivery day. When the delivery men arrived, they walked in and immediately said “the steps are too narrow.” They went outside to call and see if there was a split box spring, and came back a few minutes later holding someone else’s queen-size box spring. I know it was someone else’s box spring because they told me, and it looked well used, with a large slit running down the center of it. They were not going to attempt the stairs with mine because “it’s too heavy.” They took pictures and sent them to someone, who identified themselves as a person working for Sleep Number. That person said the workers couldn’t get it up the stairs, and either they could leave it for me to deal with or I could reject it and figure something else out.

I’m not upset that they couldn’t get the box spring up the steps, but it’s annoying that they didn’t really try. The workers didn’t attempt it with “my” box spring, but with someone else’s discarded one. I don’t know if this is common practice, but it doesn’t sit right with me as a customer. I wasn’t comfortable with them bringing someone else’s box spring into my home, and they didn’t ask if it was OK. The other box spring could have had who-knows-what on it; bed bugs or fleas are the first things that come to mind, but that’s not an exhaustive list. It’s not good practice and it doesn’t look good. Finally, I rejected the bed, and the self-identified Sleep Number representative told me that someone would contact me within the hour. It was approximately 3:30.

At about 7:30 I jumped on chat on the Sleep Number website. I told the person the situation and they told me to call in. That was it. In no way helpful. The person was professional, but if the answer is just “call customer support,” it’s useless. I said so on my survey. I called my salesperson Andy to give him an update on the situation and let him know that no one had called me. He said he was going to “rally the troops” and to expect a call soon. That call didn’t come. I understand a lot of this is more of a customer service problem and irrelevant to the facts at hand, but this is an edited version of the letter I sent to Customer Support, so I am including the total failure.

December 4th, 2019

In the morning, I checked my orders on the website to see if anything had changed. Order number 95007971341 showed the Nov 23 purchase of the items mentioned above, with the total matching my paper receipt (see redacted version below [1]). All good at that point in time. Somewhere in the ballpark of 2 pm, I contacted the store asking for my salesperson. He was off, but the store manager was working, and he confirmed that he saw an email go out to customer care to contact me about options. He hadn’t contacted me himself because he assumed someone had done so in the evening while he was off work, but no one had. He said he would “get right on it.” About 20 minutes later someone DID call me, finally; about 24 hours after I rejected the delivery, I might add. I talked with the woman for a little bit. She didn’t so much offer me options, but she was trying to be helpful. She listened to the situation, offered to talk to a manager to get me a deal on a potential king-size bed, and said she would call me back with the figures.


I am going to digress for a moment. I wanted a queen-size bed because we don’t really have room for a king. A king would be nice, but it makes the room cramped because of where the attic stairs come down. I DIDN’T want a king, but I was willing to go to one to remain a customer if the price was right. It was more of a space issue, but spending $2000 more on a bed size you don’t want is a hard sell. That’s nearly a third more than the original price.

While the customer care person was running the numbers offline, the White Marsh store manager called me back to make sure that someone had contacted me. We chatted a bit more about the situation, and he thought he could come up with some different options that might be more cost-conscious. He listened to what I said and took that feedback into account. He asked for time to review options and put numbers together. All good.

Shortly after I got off the phone with the store manager, customer care called back to discuss the one option they had put together. I found out later that they had put together the most expensive option and were trying to discount some of it down, but the price was still going to be about $1700 more than what I originally paid, for a bed size I don’t want. She offered to let me think about it for a while and call back if I wanted to do it, or they could cancel the order. OK. Not the best situation, but not the worst experience.

Let it be noted that part of the reason for buying the bed was the two-way movement. I was repeatedly offered split queen box springs that are just flat, but for us the movement was part of the package. We wanted that part or none of it. No movable frame, no deal on my part. During the 24-hour wait before someone called me back, I did research and found Saatva, a competitor that has a split queen adjustable bed and a Sleep Number–like mattress. I didn’t know they existed until I was ignored and started looking at my options. They have almost exactly what I want. However, we really liked the m7 mattress and had tested it out. It’s what we wanted.

The manager did have options that were more cost-mindful. He offered us an adjustable box spring with fewer features (the whole top and bottom move together), for an additional cost of less than $900. I told him I was cooking a birthday dinner for my daughter, who turned 7 yesterday, and that I would think about it and let him know. After dinner I wanted to print a copy of my receipt from the website, and something seemed strange…

The Data Integrity Problem of Dec 4 2019

On the website I had 2 orders: one with the original order number from Nov 23, and one from Dec 4 (the day in question). The electronic receipt for Nov 23 had only 2 items on it and totaled $7,565.21. That can’t be right: the two items were the Total protection mattress pad at $219.99 and the 12-button smart bed remote, with its price crossed out and marked FREE. Those do not add up to $7,565.21, nor does that figure match my paper receipt of $5,877.41. Mind you, this order still carries the date of Nov 23, and there is no record of my original order. The second order, when clicked, would take me back to the home page… If I didn’t have my paper receipt, I would feel like I could be in a bad situation. I decided it would be best to talk to the store manager, make sure the deal matched what we had agreed on, and sort out this problem. If the configuration of the mattress matched, we were going to spend the extra money, even though it created a suboptimal situation, because we really wanted the mattress.


I headed to the store ready to check on things and spend the money. I talked with the manager for a while, validated it was the mattress we wanted with the same or comparable features, and then we started to discuss the order changes. He checked the order in the store and couldn’t find a digital copy of the original receipt. He implied that they had a printed copy, but we used my copy as the master receipt. I want to reiterate that the date on the existing receipt still said Nov 23, which is changing history. At this point we validated that the order ID was the same, and we thought maybe the previous order had been changed but the assets had still been assigned to me. We checked the assets section, and that didn’t match what I had either.

I want to be totally clear about why I chose to ask for a refund at this point. First, the digital records of my transaction were modified by someone, without my permission, destroying the records’ trustworthiness. The assets the system said I owned didn’t match the assets I was supposed to own. The math in the order didn’t add up (remember: $219.99 for the mattress pad plus $0 for the remote does not equal $7,565.21). Nothing about my order was the same, nor was it internally consistent, let alone consistent with my physical receipt. If historical orders can be modified, assets can be inaccurately assigned, and totals don’t have to add up, nothing can be trusted. I asked this question of everyone I spoke to that night from customer care, and I am asking you, the reader, right now: taking into account all of the above, how can I trust that if I have a warranty problem or a billing dispute, someone from Sleep Number won’t edit the data to say whatever they want and deny me or overcharge me? If I chose to stick with you, I feel I would need to take notes to protect myself from you, the company I am trying to purchase from. That is literally the worst possible customer experience I can imagine: having to protect myself from a seller.
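The fix for the second problem is old and boring: a total should be derived from the line items, never stored as a free-standing editable field. A minimal sketch of that check, using the two items from my receipt (the file name and layout are mine, not Sleep Number’s):

```shell
# Hypothetical order file: item name, then unit price
cat > order.txt <<'EOF'
total-protection-mattress-pad 219.99
12-button-smart-bed-remote 0.00
EOF

# Derive the total by summing the line items
derived=$(awk '{ sum += $2 } END { printf "%.2f", sum }' order.txt)
echo "derived total: $derived"

# Compare against the total the system claims; a mismatch flags tampering or a bug
stated="7565.21"
if [ "$derived" != "$stated" ]; then
  echo "MISMATCH: system says $stated, line items say $derived"
fi
```

Run against my order, this prints `derived total: 219.99` and then the mismatch line, which is exactly the inconsistency the system happily displayed as a valid receipt.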

The rest of the experience wasn’t great either. I spoke to 4 different people, 3 of whom were “supervisors.” The first person more or less just transferred me after hearing the beginning of my story. The first supervisor said that since I physically took the mattress pad on Nov 23, it could not be returned and I wouldn’t be refunded for it. That doesn’t make sense, because I didn’t know I had the option of waiting, AND I had been intending to get the bed, which now I wasn’t. The pad was unopened, and I was standing in the store with the store manager confirming he had it. So they transferred me to person 3.

The third person identified themselves as a supervisor during the introduction. No notes from the previous 2 people, the change in my order, or my issues with pricing had been recorded anywhere; it was a cold transfer. The previous person, without my permission, had issued a refund for the items minus the mattress pad, so when we started looking at the order it didn’t show anything but a mattress pad. I didn’t want 2 refunds; I was expecting 1. Completing transactions without my consent or knowledge adds to the laundry list of problems, makes me feel violated as a customer, and proves again that I can’t trust people at Sleep Number not to change things without consent or a record of the change. This woman was nice and helpful, but she was asking me things about your system that I can’t possibly know, and that put me in a really uncomfortable and scary situation. She mentioned at one point that she could still see the mattress pad and the remote on my account, marked as refunded, but with an X next to them so she could delete them. Then she asked ME what she should do. How am I supposed to know what that does? I didn’t want to say “yes” because it might remove the items and I might never get my refund. It might delete them from my history, and I don’t even know what that would mean in this context. She eventually threw in the metaphorical towel and said I should talk to her supervisor. “The queue has a 3 minute wait time,” she said…

30 minutes later, literally past the store’s closing time, the next person picked up. The store manager had originally called on the store phone, and we didn’t want to disconnect and possibly lose all the progress. The store manager was a real professional about it. He didn’t mention the store closing. He was apologetic. He never once put time pressure on me. This is why I wanted to stick with you guys, but the customer care is a disaster.

Finally I reached the 4th and final person of the night. We had a mattress pad that needed to be removed, various things on the account that were wrong, and so on. The final person did see the last person’s notes, but we were mostly at an impasse. She talked to the store manager and ultimately did a refund and return on the mattress pad. She said a confirmation email could take days, but she sent me a PDF version directly.

Then we hit another problem with the nonsensical way your system works: she couldn’t send me any confirmation that I would be refunded the remaining amount. She assured me it would be paid back to the financing credit card within 7–10 business days, but couldn’t give me any sort of confirmation. So I have to trust that Sleep Number is going to pay me after all of this nonsense. I have negative trust in Sleep Number’s ability to manage and handle data. I have zero confidence I will actually get paid. I have great confidence that data is being manipulated without customer consent or knowledge. I just wanted to buy a bed, and you now have me taking screenshots of receipts and hoarding documents because your reality has proven to be changeable.

I don’t even know if anyone will read this or understand the gravity of how negative an experience this has been for me personally and financially. I hope this makes it to the desk of an executive so they can understand what happened and change something to fix this for someone else. I went into a store ready to spend a third more for a bed I didn’t want, and left with a refund.

Every step of the way my trust was eroded, and it ultimately led to total disappointment.

To continue the drama, I wrote the above in a Google Doc, intending to send it to a customer feedback email. I couldn’t find an email address on the website, and the address on the business card I had blocks incoming email. So I went to chat again… Chat explained that they don’t offer an email address. In 2019, they offer no email, no feedback submission form, nothing really. Having worked as a project manager, I love to get direct feedback from customers. The person on chat said to give him the feedback and he would get it to leaders. So I proceeded to split this up and paste it into a chat window…

After realizing that I was talking into the void and it wasn’t likely going anywhere, I figured I’d hit social media. I sent my feedback letter as a message on Facebook, as a Twitter DM, and finally to the only email address I had gotten from them. Facebook was the fastest to respond, but to their credit I did get a response from all of them. I got a personal phone call from the last person I had spoken to at Sleep Number, saying she received the note, that she is meeting with an executive in a week, and that she will take the feedback directly to them. We shall see if that changes anything. If an executive reaches out to me, I will post an update to the dramatic part of the story.

Finally, between my writing this and posting it… there is yet a third version of the same order number floating out there. I took a screen capture from my computer showing the date. It shows as if I am still being charged… I guess we will have to see if they ever give me my money back. I have no reason to trust their system or policies, and a lot of reasons not to…


Look out for future posts from me on data security and integrity.

Burn mp4 video files to DVD on Fedora



My wife and I recently went on a trip to Key Largo, FL for our 5 year wedding anniversary. We took a bio-tour of the area and I took some video on my phone of manatees. When I downloaded the footage from Google Photos, it came down as mp4 files. We wanted to show some of the videos to a friend, who wanted them in DVD format. I figured I would hit Google and just find a program to convert and burn them for me. What I found wasn’t so easy…

As best I can tell, as of April 2019 there is still no end-to-end documented way of going from an mp4 or mkv to a full DVD. This seems extremely strange to me. I typically keep my blog to more esoteric topics, but I am going to document this once and for all.

My System: I am running Fedora 29, current as of this writing. Most of the CLI tools I am going to use should be standard or available in any general-purpose Linux distribution. The only non-standard tools required are ffmpeg and dvdauthor. Both packages are in the RPM Fusion repos, with instructions.


Convert mp4/mkv to vob file

$ ffmpeg -i myfile.mp4 -target ntsc-dvd mydvd.vob

In the directory where you created the mydvd.vob file, create an XML file called video.xml with these contents:

<dvdauthor>
  <vmgm />
  <titleset>
    <titles>
      <pgc pause="0">
        <vob file="mydvd.vob" chapters="0" />
      </pgc>
    </titles>
  </titleset>
</dvdauthor>

Next run the following commands to create the ISO for the DVD:

$ mkdir dvd
$ dvdauthor -o dvd -x video.xml
$ mkisofs -dvd-video -o dvdimage.iso dvd/

Next, check and note the device we will use for burning:

$ wodim --devices
wodim: Overview of accessible drives (1 found) :
0 dev='/dev/sg1' rwrw-- : 'HL-DT-ST' 'DVDRAM GP65NB60'

Burn the ISO to DVD:

$ wodim -eject -tao speed=2 dev=/dev/sg1 -v -data dvdimage.iso


I’m going to try to diversify my blog a little bit more and focus on areas that are not so esoteric. Look for more, smaller posts coming soon.





Supply Chain Attacks


The News

Supply chain attacks are extremely common but not always quick to be detected. This morning I saw an article about the AUR (Arch User Repository), a community-driven repository created and managed by Arch Linux users, hosting malicious software. The software in question was an orphaned PDF viewer called “acroread.” The compromised package would mess with systemd, collect system information, and exfiltrate that data [1]. For more information, please read The Hacker News article referenced below.


The Supply Chain Problem

A supply chain is actually a complex and dynamic supply-and-demand network: a system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer [5]. A supply chain attack is when you exploit part of that process to change the end result.

I am guessing it’s because of the Gentoo [2] and Docker Hub [3] attacks that everyone is looking at their community packages, and we will see more of these reports coming out. With cryptojacking there is a monetary incentive to hack a supply chain that is “trusted” but, in reality, community-run. This is the dark side of the open code movement.

Supply chain management and transparency are hard. Double that if you are a community-supported entity that doesn’t make money or give guarantees. Do you spend your time fixing bugs, adding features, reviewing pull requests, documenting old features better, documenting new features, generating demand and community, or any of so many other actions? When there is no real control, only implied trust… you are asking for problems. Ask yourself: would you run a binary from a random guy on the internet? If so, look inward.

I want to point out that this was discovered in the AUR (Arch User Repository) [4], a community-driven repository created and managed by Arch Linux users, not in the official repositories built with the Arch Build System (ABS).

We think about supply chain management a lot at Red Hat; I talked about it often when I was a Solution Architect. Red Hat, Fedora, SUSE and Ubuntu each deliver repositories that are controlled and backed by companies. This means they hold the keys to the kingdom, and you need to follow a specific process to get a package added. The AUR appears to have something similar, but since it’s community-based, the attackers found a weakness in an orphaned package and used it as a jumping-off point.

Don’t run code some guy on the internet wrote… get it from a trusted source. I’m looking at you, snaps and juju… Unfortunately, scammers and state actors understand that you want things “easy.” There is a tension between ease of use and security, between moving fast and moving safely. Over and over again we see systems get abused. Be mindful of your sources. Hopefully Arch and Gentoo don’t take too much of a reputation hit from this, and hopefully they will be more vigilant in the future.
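On Fedora-family systems, the package manager can enforce some of this for you: a repository definition with GPG checking enabled makes dnf refuse packages whose signatures don’t verify. A sketch of such a definition (the repo name, URL and key location are placeholders, not a real repository):

```ini
[example-repo]
name=Example Repo (placeholder)
baseurl=https://example.com/repo/fedora/$releasever/$basearch/
enabled=1
# Refuse any package whose GPG signature does not verify against the key below
gpgcheck=1
gpgkey=https://example.com/repo/RPM-GPG-KEY-example
```

This doesn’t help with a compromised build pipeline upstream of the signing key, but it does stop the simplest form of the attack: swapping in a tampered package between the repository and your machine.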



[1] https://thehackernews.com/2018/07/arch-linux-aur-malware.html?m=1

[2] https://thehackernews.com/2018/06/gentoo-linux-github.html

[3] https://www.bleepingcomputer.com/news/security/17-backdoored-docker-images-removed-from-docker-hub/

[4] https://wiki.archlinux.org/index.php/Arch_User_Repository

[5] https://en.wikipedia.org/wiki/Supply_chain

OpenStack Use Cases


Prior to moving into the role of Technical Marketing Manager for Red Hat’s Partner Solutions Initiative, I was responsible for selling the Red Hat Cloud Portfolio for the mid-market in North America. I was the sole architect covering the entire country, and the first in the company with this title covering this market segment. People thought I was crazy, and it was crazy that we would put focus into this market, but I overachieved against what was set in front of me.

Public cloud is just so easy and cheap; how could anyone selling “cloud” survive in this environment? Public cloud is as simple as swiping a credit card and you have a VM, but there are some use cases that public cloud providers can’t solve. Below I have tried to remove the industry-focused terminology, because OpenStack is more than telco and HPC. This is how I would pitch the advantages of private cloud to a CxO.

According to the latest figures from IDC, outlined by Network World [1], server shipments increased 20.7% year over year in the first quarter of 2018, and revenue rose 38.6%. Let’s see if these trends continue. That’s an amazing growth rate for such a large industry.

The last thing I want to touch on, on a personal note: I don’t think the Kubernetes-on-OpenStack focus the OpenStack Foundation has held is as helpful or profitable for them as a whole. Whenever I see them demoed together, Kubernetes is doing all the interesting bits and OpenStack is just the pipes carrying the water. I can perform the same Kubernetes demo swapping out OpenStack for public cloud, or for plain virtualization for that matter. OpenStack could be a larger part of the demo and the discussion, through the aaS (as a Service) projects Kubernetes uses, but Kubernetes steals the limelight. My $0.02.


Self-Managed CapEx Cloud

I have frequently run into customers that have business process problems with OpEx expenditures. Employee salaries come out of the OpEx budget, so if they go with AWS, they may literally be putting people out of a job or losing additional head count. I’ve encountered a customer that based their own customers’ payment rates on OpEx expenditures. This meant they could spend $10,000 in CapEx and not bat an eye, but incurring $100 more of OpEx a month would ultimately make their customers’ bills larger.

A typical customer buys cloud services by the drink; OpenStack, along with a managed hosting partner or OEM, lets that same customer buy it by the pitcher. Make the cost a predictable CapEx expense. You choose how utilized you want your environment to be and have a defined cost for adding new resources. Help the CIO and CFO sleep better at night knowing what the cost will be, instead of tossing and turning because their AWS bill looks like a phone number.

My opinion is based on the sheer number of companies doing managed OpenStack and managed Red Hat OpenStack. Rackspace, IBM Bluemix, and Cisco Metacloud host Red Hat OpenStack as a service. Dell, HPE, Supermicro, Cisco, NEC and Lenovo all have reference architectures for Red Hat OpenStack, and many of them will do the racking, installing and configuring of Red Hat OpenStack before wheeling it into your data center.

API Integrated Infrastructure

Customers are using external applications to provision both infrastructure and applications, so APIs become the integration points. Common examples are an IT service manager, a CI pipeline, or an orchestration tool driving the application pipeline. A typical consumer of cloud services wants to provision compute, networking and storage all through an API, using a familiar development language and tools. You can choose Ansible, Heat templates, Puppet modules or a number of other tools, but you must choose something to scale your development and operations teams. OpenStack offers an API-first approach to all of its components, which virtualization platforms like RHV or VMware don’t. RHV offers only Python bindings that don’t seem complete; for example, I don’t know if you could provision a new LUN on a NetApp connected to RHV via the API.
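As an illustration of that API-first approach, here is a minimal Heat template sketch that boots a single server; the flavor, image and network names are placeholders, not values from any real environment:

```yaml
heat_template_version: 2016-04-08

description: Minimal sketch - boot one server on a named network

parameters:
  flavor:
    type: string
    default: m1.small   # placeholder flavor name
  image:
    type: string
    default: rhel-7     # placeholder image name

resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      flavor: { get_param: flavor }
      image: { get_param: image }
      networks:
        - network: private   # placeholder network name
```

Feeding this to `openstack stack create -t server.yaml demo` asks Heat to drive the Nova and Neutron APIs on your behalf, and the template itself can be version-controlled and reviewed like any other code.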

Converged Datacenter

This comes back to everything being API-driven. The customer can use the OpenStack GUI, CLI or RESTful API to provision resources. OpenStack also allows for hyper-convergence of compute and storage on the same box. Cinder and Swift have connectors into so many different storage systems that it becomes really convenient to manage resources in multiple ways and on multiple boxes, but through a common tool. You don’t need to rely on expensive SANs and proprietary storage systems.

IaaS provider / Managed Service Provider (MSP)

If a customer is managing infrastructure for their own customers, OpenStack is a good model because of its integration capabilities. This was a popular use case several years ago, and many startups and smaller integrators thought they could compete with the likes of Amazon or Azure. *Cough* Rackspace *Cough* Mirantis *Cough*. I think this is possible with OpenStack, but the investment required to start that business is cost-prohibitive; who wants to spend billions of dollars a year to find out? Didn’t think so.

Thinking on a smaller scale with more reasonable expectations, I think this use case has a lot of time left in it. I was working with a company sending satellites into space; in 2018 they intended to send 50 missions up, each getting its own multi-purpose cloud computing platform to support any application-driven workload the future throws at it. I was also working with a company using OpenStack to deliver their own Software as a Service to internal and external customers. I have many more use cases that are actually running production workloads. This is what OpenStack was originally designed to do.


NFV as a Service

This is our sweet spot within Red Hat OpenStack. We have many telcos coming to us to do advanced NFV functions at bare-metal speeds. However, I don’t think this is just a carrier use case. If you are running a hybrid cloud, having a platform that supports your specific NFV implementation in both public and private clouds opens up the decision-making process and fits well into a cloud exit plan.

Asynchronous Scaling

Since everything is API-driven, and with the release of OSP 10 we now have composable roles, we can scale any service (like networking) without having to scale other resources (like Identity). RHV-M, by contrast, is a “fat” application and can’t be broken out to scale independent parts.

I hope this helps give you some ideas and shows that OpenStack isn’t “dead.” I’m interested to hear what you think.


[1] https://www.networkworld.com/article/3278269/data-center/suddenly-the-server-market-is-hot-again.html

OpenStack Summit – 2018 – Vancouver



Attendance this year in Vancouver was way down from Boston last year, at about 3,500. The energy level of the keynotes, sessions and hallway conversations was muted. The expo hall was also much smaller than I have ever seen in my past 3 years attending the NA Summits.

The general focus of the OpenStack Foundation is on inclusion and moving up the stack. They are watching what developers and cloud-native customers are talking about and showing how it runs on OpenStack. It feels genuine that they are trying to help, but with so much attention on Kubernetes, OpenStack’s capabilities kept getting upstaged.

I’ve been in the OpenStack orbit for about 6 years now, starting with Essex. I started as a consultant building OpenStack clouds for development, helped organizations move smaller private clouds to production, and most recently was helping customers adopt OpenStack. I want to see OpenStack succeed. I would love to see more (or any) hybrid cloud solutions and integration points with other environments. How great would it be for Horizon to connect into your AWS or Azure console? What if Horizon integrated with the Kubernetes dashboard? Instead, the OpenStack Foundation says, let’s talk about multiple container solutions…

I don’t necessarily see the shrinking attendance as a sign of the end. I think many factors caused it to nosedive:

  1. Convention fatigue
    • May 2 – 4. Kubecon. Copenhagen, Denmark. Mass popularity in public and private cloud. International travel.
    • May 8 – 10. Red Hat Summit. San Francisco, CA. Broader Open Source and Hybrid Cloud Focus.
    • May 7 – 9. Microsoft Build. Seattle, WA. Broad industry topics and leading cloud provider.
    • May 9 – 17. Pycon. Cleveland, OH. Python being the primary language OpenStack is written in, and the largest Python event in North America.
    • May 8 – 10. Google I/O. Mountain View, CA.
    • May 21 – 24. OpenStack Summit. Vancouver, BC. The potential attendees and sponsors of this event probably wanted to attend several of the above events in the month of May alone. The overlap is huge. Which would you rather attend, have a booth at, or present at?
  2. Twice a year. There comes a time when your project isn’t the most innovative and fast-changing beast in the enterprise, and that’s a good thing. Does OpenStack still need to release two versions a year? Does it need to hold two conventions a year? I’m not one to say, but I’d love to hear what you think…
  3. International travel. I live in the United States, and traveling to Canada is not much of a problem for me, but it can be for some. If your company is not worldwide or doesn’t have a presence in Canada, I have found there is a LOT of hesitation about sending employees to another country. Each company will weigh this differently, but it is part of the equation. Also, Canada requires US citizens to have a passport, and that’s surprisingly uncommon for people in IT to have. You can’t just go to Canada on a whim, and the passport process can take 6 – 8 weeks or more.
  4. Vancouver, again. This is another minor point, but something they should consider: should we go somewhere new or return to a classic spot? In North America they seem to be hitting the old haunts. Maybe go somewhere that isn’t “just another convention” city, like Chicago or Detroit. A centrally located venue lets people from all over fly 2 – 3 hours instead of 5 – 8. I go to San Francisco and Las Vegas too many times a year for conventions. Let’s try somewhere different.

I want to see OpenStack continue to succeed. They’ve made some missteps in the past trying to do new things. Kubernetes is cashing the immutable-application check OpenStack wrote six years ago. The product is maturing and the hype is drying up. However, I hope I’ve made clear that I don’t think it’s dead at all. It’s in an important phase of its life: middle age. Accept it. A fancy new convertible and black hair dye won’t change the fact that you’ve grown up. That’s a good thing.


Videos to watch

Business Focus

Is the public cloud really eating OpenStack’s lunch?

OpenStack for AWS Architects



Technology Focus

OpenStack upgrade strategy: the fast forward upgrade

Root your OpenStack on a solid foundation of leaf-spine architecture

Integrating Keystone with large-scale centralized authentication

Leveraging serverless functions in Heat to deliver anything as a service (XaaS)

The illusion of infinite capacity, part 2: oversubscription

Free software needs free tools

Workload-optimized OpenStack made easy

Engineering container security

Intro to container security

Container Linux and RHEL: The road ahead

This is from the Red Hat Summit 2018 series.

I’m not the person to give you the most up-to-date roadmap for RHEL, and especially not for Container Linux. I’ve been focused on the multi-cloud management and automation world for the past few years. A lot of developments have been afoot. Here are some amazing highlights with my commentary. I strongly suggest you check out the video below.

Immutable Foundation for OpenShift

The new Red Hat CoreOS (RHCOS) focuses on several key areas. Red Hat Enterprise Linux and RHCOS are designed for different use cases, and it’s important not to mix them. RHCOS will stay focused on one major thing: OpenShift/Kubernetes. RHCOS will follow the same cadence as Red Hat OpenShift Container Platform (RHOCP), meaning that when Kubernetes and OpenShift release, RHCOS releases. This lets engineering keep pace and make the changes necessary for each new release of RHOCP. It also means that when a specific version of RHOCP is sunset, the locked-in version of RHCOS is sunset with it. Now the releases stay coupled and supportable without having to drag RHEL along before it’s ready.

There are so many more changes, but I don’t just want to copy slides. Hopefully you are intrigued enough to watch the video. Thanks for reading!

OpenShift/Kubernetes Logging Overview

During a meeting it was brought up to me that the OpenShift/Kubernetes logging strategy isn’t very clear. While looking into this, I wanted to put some context around the technology: “How does OpenShift capture logs?” “What is captured and logged?” “What are my recommendations for using the logging system?”

EFK Stack

EFK stands for Elasticsearch (E), Fluentd (F), and Kibana (K). This is a variation on the traditional ELK stack that has become popular in recent years for log aggregation, collection, and sorting. Kibana acts as the user interface for the collected logs. Elasticsearch is the search and analytics engine. Fluentd is a unified logging layer with hundreds of plugins (500+ as of the time of this writing [1]).

What is captured

Looking through the Kubernetes documentation, it becomes clearer what is captured, where, and how application logs are managed at the container level. In the section titled ‘Logging at the node level’ [2] it is explained that “Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in json format.” The OpenShift documentation says, “Fluentd reads from /var/log/messages and /var/log/containers/*.log for system logs and container logs, respectively. You can instead use the systemd journal as the log source. There are three deployer configuration parameters available in the deployer ConfigMap.” [3]. For additional information and resources on Fluentd, I strongly recommend watching the ‘OpenShift Commons’ video from May 17, 2017 [4].
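To make the node-level flow concrete, here is a minimal sketch of what one json-file log entry looks like and how it parses. The sample line and the path in the comment are illustrative assumptions based on the documentation quoted above, not values from a real cluster:

```python
import json

# One illustrative line as the Docker json-file logging driver would write it
# under /var/log/containers/ on a node (sample content is invented):
sample = '{"log":"GET /healthz 200\\n","stream":"stdout","time":"2018-05-21T10:15:00Z"}'

entry = json.loads(sample)
print(entry["stream"])        # which stream the engine captured: stdout or stderr
print(entry["log"].rstrip())  # the raw line the application actually wrote
```

Fluentd tails these files, so every line an application writes to stdout or stderr ends up as a structured record like this.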

Cluster wide -vs- Project logging

This is not a simple question to answer. I’m drawing on my experience working with other clustered technologies and with customers that have implemented OpenShift. My recommendation is to do what’s right for your environment. I know… not very useful. Hopefully my heuristics will lead you to your answer.

Business Reasons

More often than not, your company, organization, group, and team each have their own structure. I’ve worked with companies and agencies that have had every sort of organically grown business structure: some extremely independent, some centralized, and some ignorant of structure entirely. We have to consider how you do business today, what will actually work, and how we can fit into that system. What are the security requirements? What are the data retention policies? What is the disaster recovery strategy?

Technical Reasons

Suppose I proposed that every project run its own EFK stack to manage only its own logs. A customer running 100+ projects will have a LOT of redundancy, and the overhead for a security team to manage and track those logs could be prohibitively expensive and complicated. How does a security team monitor the creation of new projects, validate their access, and ultimately ensure the security and compliance of the systems?

If I proposed one giant company-wide EFK stack, it would lighten the burden for some but could cause data management and growth complications. The security team is happy, because they have one login on one server to see all the system and application logs generated by the containers and applications. Now let me assume for a minute a less common use case for OpenShift: batch processing. I want to use the platform to run an ETL function on a file I have stored out in S3. That project or job, which lives and dies on my whim, might introduce long-stale data into my logging system and tool chain. The point of a job is to run and be gone, so I might not care about the details.

While working as a US Army consultant in 2011, we were implementing Splunk. Working through the data ingest rates and figuring out what was good versus stale data was complicated, and we had fairly static workloads. Working through all the requirements will likely guide you in the right direction. I suggest pruning down to what is important and measuring it often: a high signal-to-noise ratio. This typically means smaller units or project-based logging. It becomes quite daunting to measure every job, application, and container in your environment on an ongoing basis. Offload that responsibility to the application and project owners.

Since I mentioned Splunk, I thought it important to include the following section as well: ‘Configuring Fluentd to Send Logs to an External Log Aggregator’. You can configure Fluentd to send a copy of its logs to an external log aggregator, instead of the default Elasticsearch, using the secure-forward plug-in. From there, you can further process log records after the locally hosted Fluentd has processed them [3].
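As a rough sketch of what that looks like, a secure-forward output stanza lives alongside the Fluentd configuration described in the OpenShift docs [3]. The hostname, port, key, and certificate path below are all placeholders, not values from a real deployment; check the docs for the exact parameters your version supports:

```
# Illustrative secure_forward output for Fluentd; every value here is a
# placeholder to be replaced with your aggregator's real details.
<store>
  @type secure_forward
  self_hostname ${hostname}
  shared_key your_shared_key
  secure yes
  ca_cert_path /etc/fluent/keys/your_ca_cert
  <server>
    host external-aggregator.example.com
    port 24284
  </server>
</store>
```

The effect is that each node’s Fluentd forwards a copy of its records over TLS to the external aggregator (Splunk, another Fluentd, etc.) while local processing continues as before.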


Fluentd Plug ins list
[1] https://www.fluentd.org/plugins/all

Logging at the node level
[2] https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level

Aggregate Logging – OpenShift Docs
[3] https://docs.openshift.com/container-platform/3.3/install_config/aggregate_logging.html

Fluentd – OpenShift Commons Briefing
[4] https://blog.openshift.com/openshift-commons-briefing-72-cloud-native-logging-fluentd/

Intro to CloudForms Tags

This blog post was originally posted by myself on Blogger (8/26/2016).

CloudForms Intro

Red Hat CloudForms offers unified management for hybrid environments, providing a consistent experience and functionality across container-based infrastructures, virtualization, and private and public cloud platforms. Red Hat CloudForms enables enterprises to accelerate service delivery by providing self-service, including complete operational and lifecycle management of deployed services. It provides greater operational visibility through continuous discovery, monitoring, and deep inspection of managed resources, and it ensures compliance and governance via automated policy enforcement and remediation. All the while, CloudForms reduces operational costs by reducing or eliminating the manual processes that burden IT staff.

For more information visit http://redhat.com/cloudforms


I think tags are one of the most important features of Red Hat CloudForms. CloudForms’ ability to tag resources for later use in reporting, chargeback/showback, and automation is critical for building more in-depth knowledge and generating laser-focused reports that provide value.

In this article I am going to touch on the general guidelines I use when building a tag schema. I believe there are two rules when talking about CloudForms tags: it’s better to over-tag your resources than to under-tag them, and if you can measure it, you can manage and monitor it. Just like any data structure, a well-thought-out schema will save you a lot of work.

Tag Schema Recommendations

Business Tags:

The most important thing about your business tag schema is that it makes sense to you and your company. The examples I list below are a very rough approximation of what your business might look like and how it operates. Think about logically grouped business resources and come up with a tag for each.

Business Unit
– Sales (North America)
– Engineering
– QA
– Marketing
– IT Development
– IT Operations
Business Project
– oVirt
– OpenStack
– Project Phoenix
Business Owner
– VP – Linus Torvalds
– Project Owner – Richard Stallman
– Manager of Marketing (East) – Doris Hopper 

IT Operations Tags

If you’re reading this blog post, these tags probably matter most to you. Remember, measure what matters most to you. Help the business understand your value and what you do. I know it, you hopefully know it, let them know it too.

– VMWare
  – Production Systems
  – Development Lab
– Solid State Hard Drive (SSD)
– Dell Hardware
– Database
  – PostgreSQL 5
  – Oracle DB 11G
– Web Server

– Diamond
– Gold
– Silver
– Bronze

Site (Geographic)
– New York Datacenter
– Hong Kong Datacenter
– Zone A – Virginia DC
– Zone B – Virginia DC


Change Tags

Imagine if IT and the business came to an agreement on service windows based on what worked for each business unit. This can happen. Maybe you want to run test deployments on production resources. Tagging change windows onto your resources will help with reporting and also with automation.

Patch Window

– Patch Window A (Second Tuesday of the month)
– Patch Window B (Last Sunday of the month)
– Canary Deployment Environment


Service Level Agreement (SLA)

A service level agreement (SLA) is a contract between a service provider (either internal or external) and the end user that defines the level of service expected from the service provider.

– Diamond
– Gold
– Silver
– Bronze

Security Tags

Security tagging is something I’m still working through. I know there is value in creating a security role, group, and users for dashboards and reporting. I’m looking at linking this into policies and exporting log events to a SIM (Security Information Management) system.

– Information Assurance Security Team Check
  – IA Validated
  – IA Not Checked
– Service Catalog Provisioned (Provisioned Machines Certified Gold Master)
– Satellite Verified
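The payoff of a schema like the ones above is that any tag becomes a filter for reports, chargeback, and automation. As a toy sketch of that idea (the resource names and tag values are invented for illustration, not pulled from a real CloudForms inventory):

```python
# Toy inventory; names and tag values are invented for illustration only.
resources = [
    {"name": "vm-web-01", "tags": {"business_unit": "Marketing", "sla": "Gold"}},
    {"name": "vm-db-01",  "tags": {"business_unit": "IT Operations", "sla": "Diamond"}},
    {"name": "vm-qa-03",  "tags": {"business_unit": "QA", "sla": "Bronze"}},
]

def by_tag(items, key, value):
    """Return the names of resources whose tag `key` equals `value`."""
    return [r["name"] for r in items if r["tags"].get(key) == value]

print(by_tag(resources, "sla", "Gold"))
```

Every report, dashboard widget, or automation policy is ultimately some variation of this lookup, which is why a consistent, well-planned set of tag keys matters so much.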

Call to Action
I would love to find out what you are doing with your tagging schema. Contact me so we can discuss.