Burn mp4 video files to DVD on Fedora

Overview

My wife and I recently went on a trip to Key Largo, FL for our five-year wedding anniversary. We took a bio-tour of the area, and I shot some phone video of manatees. When I downloaded it from Google Photos, it came down as an mp4 file. We wanted to show some of the videos to a friend, and they wanted it in DVD format. I figured I would hit Google and quickly find a program to convert and burn it for me. What I found wasn’t so easy…

The best I can tell, as of April 2019, there is still no documented end-to-end way of going from an mp4 or mkv to a finished DVD. This seems extremely strange to me. I typically keep my blog to more esoteric topics, but I am going to document this once and for all.

My System: I am running Fedora 29, current as of this writing. Most of the CLI tools I am going to use should be standard or available in any general-purpose Linux distribution. The only non-standard tools required are ffmpeg and dvdauthor. Both packages are in the RPM Fusion repos, which include installation instructions.

Process

Convert mp4/mkv to vob file

$ ffmpeg -i myfile.mp4 -target ntsc-dvd mydvd.vob
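Before authoring the disc, it's worth a quick sanity check that the converted file will actually fit. This is a back-of-the-envelope sketch: the capacity and bitrate figures are my own ballpark assumptions (roughly 1 MB/s of muxed output for the ntsc-dvd target), not anything ffmpeg guarantees, and video_minutes is an example value you should replace with your clip's length.

```shell
# Rough check: will the VOB fit on a single-layer DVD?
dvd_capacity_mb=4482    # ~4.7 GB single-layer disc, in MiB (approx.)
mb_per_minute=60        # ~1 MB/s of muxed output (rough assumption)
video_minutes=35        # illustrative clip length; use your own

needed_mb=$((video_minutes * mb_per_minute))
if [ "$needed_mb" -le "$dvd_capacity_mb" ]; then
    echo "fits: ${needed_mb} MB of ${dvd_capacity_mb} MB"
else
    echo "too big: ${needed_mb} MB exceeds ${dvd_capacity_mb} MB"
fi
```

If the numbers come out close, re-encode at a lower bitrate or split the video across discs.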

In the directory where you created the mydvd.vob file, create an XML file called video.xml with these contents:

<dvdauthor>
  <vmgm>
    <menus>
      <video/>
      <audio/>
    </menus>
  </vmgm>
  <titleset>
    <titles>
      <pgc pause="0">
        <vob file="mydvd.vob" chapters="0"/>
      </pgc>
    </titles>
  </titleset>
</dvdauthor>
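The chapters attribute places a single chapter at the start of the title. If you want chapter stops you can skip to with the remote, dvdauthor also accepts a comma-separated list of timestamps in that attribute. A variant of the same file (the timestamps here are purely illustrative):

```xml
<dvdauthor>
  <vmgm>
    <menus>
      <video/>
      <audio/>
    </menus>
  </vmgm>
  <titleset>
    <titles>
      <pgc pause="0">
        <vob file="mydvd.vob" chapters="0,15:00,30:00"/>
      </pgc>
    </titles>
  </titleset>
</dvdauthor>
```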

Next run the following commands to create the ISO for the DVD:

$ mkdir dvd
$ dvdauthor -o dvd -x video.xml
$ mkisofs -dvd-video -o dvdimage.iso dvd/

Next, check and note the device to use for burning:

$ wodim --devices
wodim: Overview of accessible drives (1 found) :
-------------------------------------------------------------------------
0 dev='/dev/sg1' rwrw-- : 'HL-DT-ST' 'DVDRAM GP65NB60'
-------------------------------------------------------------------------

Burn the ISO to DVD:

$ wodim -eject -tao speed=2 dev=/dev/sg1 -v -data dvdimage.iso
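After the burn finishes, you can optionally verify the disc against the image. A sketch, assuming your drive also shows up as a block device such as /dev/sr0 (wodim's /dev/sg1 above is the SCSI-generic node; reading back is done from the block device, and the exact path is an assumption about your system):

```shell
# Read back exactly as many 2048-byte sectors as the ISO contains
# and compare checksums. "verified" means the disc matches the image.
verify_burn() {
    iso=$1
    dev=$2
    sectors=$(( $(stat -c %s "$iso") / 2048 ))
    disc_sum=$(dd if="$dev" bs=2048 count="$sectors" 2>/dev/null | md5sum | cut -d' ' -f1)
    iso_sum=$(md5sum "$iso" | cut -d' ' -f1)
    if [ "$disc_sum" = "$iso_sum" ]; then
        echo "verified"
    else
        echo "mismatch"
    fi
}

# Example usage (device path is an assumption about your system):
# verify_burn dvdimage.iso /dev/sr0
```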

I’m going to try to diversify my blog a little bit more and focus on areas that aren’t so esoteric. Look for more, smaller posts coming soon.

Supply Chain Attacks

The News

Supply chain attacks are extremely common but not always quick to be detected. This morning I saw an article about the AUR (Arch User Repository) [4], a community-driven repository created and managed by Arch Linux users, hosting malicious software. The software in question was an orphaned PDF viewer called “acroread”. This software would mess with systemd, collect system information, and exfiltrate that data [1]. For more information, please read The Hacker News article referenced.

The Supply Chain Problem

A supply chain is a system of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer [5]; in practice it is a complex and dynamic supply-and-demand network. A supply chain attack is when you exploit part of that process to change the end result.

I am guessing it’s because of the Gentoo [2] and Docker Hub [3] attacks that everyone is looking at their community packages, and we will see more of these reports coming out. With cryptojacking there is a monetary incentive to hack a “trusted” (but in reality community-run) supply chain. This is the dark side of the open code movement.

Supply chain management and transparency is hard. Double that if you are a community-supported entity that doesn’t make money or give guarantees. Do you spend time fixing bugs, adding features, reviewing pull requests, documenting old features better, documenting new features, generating demand and community, or any of so many other tasks? When there is not a level of control but an implied trust… you are asking for problems. Ask yourself, would you run a binary from a random guy on the internet? If so, look inward.

I want to point out that this was discovered in the AUR, a community-driven repository created and managed by Arch Linux users, not in official repositories like those built with the Arch Build System (ABS).

We think about supply chain management a lot at Red Hat. I would talk about this often when I was a Solution Architect. Red Hat, Fedora, SUSE and Ubuntu each deliver repositories that are controlled and backed by companies. This means they hold the keys to the kingdom, and you need to follow a specific process to get a package added. The AUR appears to have something similar, but since it’s community-based, attackers found a weakness in an orphaned package and used it as a jumping-off point.

Don’t run some code a guy on the internet wrote… Get it from a trusted source. I’m looking at you, snaps and juju… Unfortunately, scammers and state actors understand you want things ‘easy’. There is a tension between ease of use and security, between moving fast and moving safely. Over and over again we see systems get abused. Be mindful of your sources. Hopefully Arch and Gentoo don’t take too much of a reputation hit from this, and hopefully they are more vigilant in the future.

Sources

[1] https://thehackernews.com/2018/07/arch-linux-aur-malware.html?m=1

[2] https://thehackernews.com/2018/06/gentoo-linux-github.html

[3] https://www.bleepingcomputer.com/news/security/17-backdoored-docker-images-removed-from-docker-hub/

[4] https://wiki.archlinux.org/index.php/Arch_User_Repository

[5] https://en.wikipedia.org/wiki/Supply_chain

OpenStack Use Cases

Prior to moving into the role of Technical Marketing Manager for Red Hat’s Partner Solutions Initiative, I was responsible for selling the Red Hat Cloud Portfolio for Mid-Market North America. I was the sole Architect covering the entire country, and the first in the company with this title covering this market segment. People thought it was crazy to put focus into this market, but I overachieved against what was set in front of me.

Public cloud is just so easy and cheap; how could anyone selling “cloud” survive in this environment? Public cloud is as simple as swiping a credit card and you have a VM, but there are some use cases that public cloud providers can’t solve. Below I have tried to remove the industry-focused terminology, because OpenStack is more than telco and HPC. This is how I would pitch the advantage of private cloud to a CxO.

According to the latest figures from IDC, outlined by Network World [1], server shipments increased 20.7% YOY in quarter 1 of 2018. Revenue rose 38.6%. Let’s see if these trends continue. That’s an amazing growth rate for such a large industry.

The last thing I want to touch on, from a personal note: I don’t think the Kubernetes-on-OpenStack focus the OpenStack Foundation has held is as helpful or profitable for them as a whole. Whenever I see them demoed together, Kubernetes is doing all the interesting bits and OpenStack is just the pipes carrying the water. I can perform the same Kubernetes demo swapping out OpenStack for public cloud, or virtualization for that matter. OpenStack could be a larger part of the demo and discussion through the aaS (as a Service) projects Kubernetes uses, but Kubernetes steals the limelight. My $0.02.


USE CASES:

Self-Managed CapEx Cloud

I have frequently run into customers that have business process problems with OpEx expenditures. Employee salaries come out of the OpEx budget, so if they go with AWS, they may literally be putting people out of a job or losing additional head count. I’ve encountered a customer that based their customer payment rates on OpEx expenditures. This meant that they could spend $10,000 in CapEx and not bat an eye, but incurring $100 more of OpEx a month would ultimately make their customers’ bills larger.

A typical customer buys cloud services by the drink; OpenStack, along with a managed hosting partner or OEM, allows that same customer to buy it by the pitcher. Make the cost a predictable CapEx expense. You choose how utilized you want your environment to be and have a defined cost for adding new resources to the environment. Help the CIO and CFO sleep better at night knowing what the cost will be, not tossing and turning because their AWS bill looks like a phone number.

My opinion is based on the sheer number of companies doing managed OpenStack and managed Red Hat OpenStack. Rackspace, IBM Bluemix, and Cisco Metacloud are hosting Red Hat OpenStack as a service. Dell, HPE, Supermicro, Cisco, NEC and Lenovo all have reference architectures for Red Hat OpenStack, and many of them will do the racking, installing and configuring of Red Hat OpenStack before wheeling it into your data center.

API Integrated Infrastructure

Customers are using external applications to provision both infrastructure and applications, so APIs become the integration points. Common examples of this are using an IT service manager, a CI pipeline, or an orchestration tool as part of the application pipeline. A typical consumer of cloud services wants to provision compute, networking and storage all using an API, allowing them to use a familiar development language and tools. You can choose Ansible, Heat templates, Puppet modules or a number of other tools, but you must choose something to scale your development and operations teams. OpenStack offers an API-first approach to all the components, which virtualization platforms like RHV or VMware don’t. RHV offers only Python bindings that don’t seem complete; for example, I don’t know if you could provision a new LUN on your NetApp connected to RHV via the API.
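To make the API-first point concrete, here is a minimal Heat template sketch declaring compute, networking and storage through one tool. The resource names, flavor, and image are illustrative assumptions, not taken from any particular environment:

```yaml
heat_template_version: 2016-10-14

description: >
  Sketch: compute, networking and storage declared through a single
  API-driven tool. Flavor and image names are placeholders.

resources:
  demo_net:
    type: OS::Neutron::Net

  demo_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: demo_net }
      cidr: 10.0.0.0/24

  demo_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10

  demo_server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small
      image: cirros
      networks:
        - network: { get_resource: demo_net }
```

Deployed with something like `openstack stack create -t demo.yaml demo`, every resource above rides the same APIs you could also drive from Ansible or a CI pipeline.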

Converged Datacenter

This comes back to everything being API driven. The customer can use the OpenStack GUI, CLI or RESTful API to provision resources. We also allow for hyper-convergence of compute and storage on the same box. Cinder and Swift have connectors into so many different storage systems that it becomes really convenient to manage resources in multiple ways and on multiple boxes, but through a common tool. You don’t need to rely on expensive SANs and proprietary storage systems.

IaaS provider / Managed Service Provider (MSP)

If we have a customer managing infrastructure for their customers, OpenStack is a good model because of the integration capabilities. This was a popular use case several years ago, and many startups and smaller integrators thought they could compete with the likes of Amazon or Azure. *Cough* Rackspace *Cough* Mirantis *Cough*. I think this is possible with OpenStack, but the investment required to start that business is cost prohibitive; who wants to spend billions of dollars a year to find out? Didn’t think so.

Thinking on a smaller scale with more reasonable expectations, I think this use case has a lot of time left in it. I was working with a company sending satellites into space. In 2018 they intended to send 50 missions into space, each getting its own multi-purpose cloud computing platform to support any application-driven workload the future throws at it. I was also working with a company that is using OpenStack to deliver its own Software as a Service to internal and external customers. I have many more use cases that are actually running production workloads. This is what OpenStack was originally designed to do.

NFV as a Service

This is our sweet spot within Red Hat OpenStack. We have many telcos coming to us to do advanced NFV functions at bare-metal speeds. However, I don’t think this is just a carrier use case. If you are running a hybrid cloud, having a platform that supports your specific NFV implementation in both the public and private clouds opens up the decision-making process and fits well into a cloud exit plan.

Asynchronous Scaling

Since everything is API driven, and with the release of OSP 10 we now have composable roles, we can scale any service (like networking) without having to scale other resources (like Identity). RHV-M is a “fat” application and can’t be broken out to scale independent parts.

I hope this helps give you some ideas and shows that OpenStack isn’t “dead”. I’m interested to hear what you think.

Resources

[1] https://www.networkworld.com/article/3278269/data-center/suddenly-the-server-market-is-hot-again.html

OpenStack Summit – 2018 – Vancouver

Summary

Attendance this year in Vancouver, about 3,500, was way down from Boston last year. The energy level of the keynotes, sessions and hallways was muted. The expo hall was also much smaller than I have ever seen in my past three years attending the NA Summits.

The general focus of the OpenStack Foundation is on inclusion and moving up the stack. They are watching what developers and cloud-native customers are talking about and showing how it runs on OpenStack. It feels genuine that they are trying to help, but with so much attention on Kubernetes, OpenStack’s capabilities kept getting upstaged.

I’ve been in the OpenStack orbit for about six years now, starting with Essex. I started as a consultant building OpenStack clouds for development. I’ve helped organizations move smaller private clouds to production. Most recently I was helping customers adopt OpenStack. I want to see OpenStack succeed. I would love to see more (or any) hybrid cloud solutions and integration points with other environments. How great would it be for Horizon to connect into your AWS or Azure console? What if Horizon integrated with the Kubernetes dashboard? Instead, the OpenStack Foundation says, let’s talk about multiple container solutions…

I don’t necessarily see the general shrinking of attendance as a sign of the end. I think there are many factors that caused this nosedive:

  1. Convention fatigue
    • May 2 – 4. Kubecon. Copenhagen, Denmark. Mass popularity in public and private cloud. International travel.
    • May 8 – 10. Red Hat Summit. San Francisco, CA. Broader Open Source and Hybrid Cloud Focus.
    • May 7 – 9. Microsoft Build. Seattle, WA. Broad industry topics and leading cloud provider.
    • May 9 – 17. Pycon. Cleveland, OH. Python being the primary language OpenStack is written in, and the largest Python event in North America.
    • May 8 – 10. Google I/O. Mountain View, CA.
    • May 21 – 24. OpenStack Summit. Vancouver, BC. In the month of May alone, the potential attendees and sponsors of this event probably wanted to attend several of the above events. The overlap is huge. Which would you rather attend, staff a booth at, or present at?
  2. Twice a year. There comes a time when your project isn’t the most innovative, fast-changing beast in the enterprise, and that’s a good thing. Does OpenStack still need to release two versions a year? Does it need to have two conventions in a year? I’m not one to say, but I’d love to hear what you think…
  3. International Travel. I live in the United States and traveling to Canada is not much of a problem for me, but it can be for some. If your company is not worldwide or doesn’t have a presence in Canada, I have found there is a LOT of hesitation about sending employees to another country. Each company will weigh this differently, but it is part of the equation. Also, Canada requires US citizens to have a passport, and that’s surprisingly uncommon for people in IT to have. You can’t just go to Canada on a whim, and the passport process can take 6 – 8 weeks or more.
  4. Vancouver, again. This is another minor point but something they should consider. Should we go somewhere new or go to a classic spot? It seems like in NA they are hitting the old haunts. Maybe go to places where it’s not “just another convention”, like Chicago or Detroit. A centrally located city lets people coming from all over fly 2 – 3 hours instead of 5 – 8. I go to San Francisco and Las Vegas too many times a year for conventions. Let’s try somewhere different.

I want to see OpenStack continue to succeed. They’ve made some missteps in the past trying to do new things. Kubernetes is cashing the immutable-application check OpenStack wrote six years ago. The product is maturing and the hype is drying up. However, I hope I’ve made clear that I think it’s not at all dead. It’s in an important phase of its life. Middle aged. Accept it. A fancy new convertible and black hair dye won’t change the fact you’ve grown up. That’s a good thing.

Videos to watch

Business Focus

Is the public cloud really eating OpenStack’s lunch?

OpenStack for AWS Architects

Technology Focus

OpenStack upgrade strategy the fast forward upgrade

Root your OpenStack on a solid foundation of leaf spine architecture

Integrating keystone with large scale centralized authentication

Leveraging serverless functions in heat to deliver anything as a service-xaas

The illusion of infinite capacity part 2 – over subscription

Free software needs free tools

Workload optimized OpenStack made easy

Engineering container security

Intro to container security

OpenShift Serverless

At Red Hat Summit 2018, there was a lot of focus on serverless of all sorts. A major new player (S2I, by contrast, has existed as long as OpenShift) is Functions as a Service on OpenShift. I’ve played with several, but we have settled on Apache OpenWhisk.

I can say most of my customers are on the public cloud, the majority of them on AWS. Every one of my customers on public cloud is talking to me about Lambda. The key takeaway is that everyone is talking about it without fully realizing the implications. If you commit to Lambda, you are committing deeply to AWS. The more functions you create, the harder it is, even if the code is portable, to rebuild everything somewhere else. I’ve drawn an analogy that seems to resonate, with me at least: “writing Lambda functions is like writing vendor-specific hardware kernel integrations back in the 90s.”

The decision to change platforms becomes a much harder and longer choice if you are not using open platforms and products. That’s been the value of open source platforms for 25 years.

Serverless on OpenShift Interactive Learning Platform
https://learn.openshift.com/serverless

Chris Wright at Red Hat Summit 2018: Emerging technology and innovation

See FaaS in action along with the power of your portfolio of middleware applications running on multiple clouds. Make your code and data portable; make it Red Hat.

Functions-as-a-Service with OpenWhisk and Red Hat OpenShift

Container Linux and RHEL: The road ahead

This is from the Red Hat Summit 2018 series.

I’m not the person to give you the most up-to-date roadmap for RHEL, and especially not for Container Linux. I’ve been focused on the multi-cloud management and automation world for the past few years. A lot of developments have been afoot. Here are some amazing highlights with my commentary. I strongly suggest that you check out the video below.

Immutable Foundation for OpenShift

The new Red Hat CoreOS (RHCOS) focuses on several key areas. Red Hat Enterprise Linux and RHCOS are designed for different use cases, and it’s important not to mix them. RHCOS will keep focused on one major thing… OpenShift/Kubernetes. RHCOS will be on the same cadence as Red Hat OpenShift Container Platform (RHOCP). This means that when Kubernetes and OpenShift release, RHCOS will release. This allows engineering to keep the same speed and make the changes necessary for each new release of RHOCP. It also means that when a specific version of RHOCP is sunset, the locked-in version of RHCOS is sunset with it. Now the releases can stay coupled and supportable without having to drag RHEL along before it’s ready.

There are so many more changes, but I don’t just want to copy slides. Hopefully you are intrigued enough to watch the video. Thanks for reading!

Red Hat Summit Recap

My Thoughts

I spent last week in San Francisco attending Red Hat Summit 2018. This is my third Red Hat Summit in a row. The experience is always a blur and a whirlwind of activity from beginning to end. I am going to try to put together a few posts this week about things I thought were significant to the direction of Red Hat and the industry.

I had an amazing time meeting with friends and colleagues. I spent quite a bit of time helping customers at the Public Cloud booth. The booth was supposed to help customers understand the portability of their subscriptions to the public cloud via the Cloud Access Program, but most of the focus for customers was running OpenShift on the public cloud.

A recap of Red Hat Summit 2018. 

OpenShift with CockroachDB

OpenShift

“Red Hat® OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. Regardless of your applications architecture, OpenShift lets you easily and quickly build, develop, and deploy in nearly any infrastructure, public or private.” [1]

I’ve helped a lot of customers find their way to OpenShift. I’ve helped them develop and refine use cases as well as figure out how it fits into their environment.

Cockroach DB

Other than having an unfortunate name, I really like Cockroach Labs’ CockroachDB from the little I’ve used it. I am by no means an expert on CockroachDB. I first learned about it at OpenStack Summit 2017 in Boston. Kudos to the guys for putting that presentation together and presenting it on the big stage.

What is CockroachDB?

“CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. It scales horizontally; survives disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports strongly-consistent ACID transactions; and provides a familiar SQL API for structuring, manipulating, and querying data.” [2]

There is a lot more to be said about the statement above that I am not going to cover here. To summarize, you can partition and replicate your database while still making sure queries only get the latest data, called “strong consistency”.

In Closing

On a personal level, I like the notion of OpenShift and CockroachDB together for a couple of reasons. Chiefly, they are both open source. You can experiment locally with no up-front cost on a developer machine and eventually roll it into production. When you are ready for production, both projects have enterprise support offerings. You don’t have to invest time, money and resources up front just to find out 6-9 months down the line that it’s not what you hoped it would be. Experiment now! My next post will give you the framework to start….

New year for Wizard

2018 is upon us and I’ve renewed my interest in posting to this blog. I’ve been doing a ton of learning in 2017 and I think 2018 is going to be the year I start spreading that knowledge.

I’m always pushing myself to look for new ground or to enhance concepts we are already familiar with. Look for more of this in the coming weeks. I think I have things to say about data storage (databases) in the Kubernetes space.