1,500 words. Plan on about 6 minutes to read this.
I’m used to writing and to podcasting. I know what the content creation and publication process looks like for written and audio media. The increasing popularity of video has had me and my business partner scratching our heads, wondering how we can best leverage the medium. Or if we even should.
And so, we’ve begun our video adventure the way we’ve always done things. Just go for it. Try it. Hit publish. It won’t be perfect, but that’s okay. Learn and improve.
My first video was a good bit of work: roughly eight hours to write, shoot, produce, and publish a ten-minute video covering some tech industry news. That’s not scalable, but it was a learning experience. Here was my process.
I get press releases from dozens of marketers and public relations firms, usually several per day. I chose some that I thought folks might be interested in. And then I wrote copy. I know from past projects that many written words translate to many spoken minutes. You have to keep copy tight if you’re writing to a time limit.
I managed to do that, writing just under a thousand words of copy. I did ad lib a bit, but overall, I didn’t stray far from the copy. In fact, you can watch the video and track the words here if you want to see just how close I kept it.
There’s a point of reference for you. A thousand words of copy plus a bit of ad-lib resulted in ten minutes of video.
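As a back-of-envelope check, that ratio works out to roughly 100 spoken words per minute. A tiny sketch of the estimate (the 100 wpm pace comes from this one video, not a general rule, so treat it as an assumption):

```python
# Estimate on-camera runtime from a script's word count.
# The default pace of 100 words per minute is derived from a single
# data point (1,000 words of copy -> ~10 minutes of video, ad-libs included).

def estimated_minutes(word_count: int, words_per_minute: int = 100) -> float:
    """Rough runtime estimate for a script of word_count words."""
    return word_count / words_per_minute

print(estimated_minutes(1000))  # → 10.0
```

Published guides often quote 130–150 wpm for scripted speech, so your own pace is worth measuring before trusting any default.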
I shot with a green screen background I’ve rigged up in my office. It’s not great, but it’s good enough. For this shoot, the screen was hanging with no tension. I’m adding clips to stretch the screen so it hangs flatter and lights more evenly. If you look at the top right clip, you can see the wrinkle that formed; more clips will help.
The point of the green screen is to allow me to insert whatever background I want to in its place. This is easily accomplished with Final Cut Pro X, my video editing tool.
I shot in 4K at 30fps using an iPhone 6S+. I’m only going to publish in 1080p, but shooting in 4K means I can crop, use the highest res graphics possible, etc. and minimize loss of image quality when rendering to 1080p.
I use the same principle when recording audio. I usually record podcasts at 48kHz/24-bit mono for what will ultimately be a 64Kbps mono MP3 when distributed – more bits to work with in editing means plug-ins have more zeros and ones to act on, and presumably makes for a better end result.
I don’t have a good lighting solution yet. For this shoot, I lit my face with a diffused LED panel lamp with a mix of cold and warm LEDs. The light was mounted straight ahead of me. The nature of my office means that I also have a strong side light coming from the south-facing window during the day. In the video, this ended up casting a shadow on the left side of the video behind my head. It looked a little strange. You can see the side-lighting in the green screen shot above as well.
In any case, I need more lighting in the right places to fill shadow behind me. My office is small, so I’m looking into how I can get this done without filling what little floor space I have with box lights, etc. But, box lights might be where I end up anyway.
Another issue in the video is that I’m looking off-camera to read copy, which leaves the video feeling disconnected. However, there are many teleprompter solutions available. The teleprompters I’m researching use beamsplitter glass, which acts as a mirror for the teleprompter text while letting the camera shoot you through it without seeing the text.
Thus, with the right teleprompter, I can read my copy while looking straight into the camera. I’ve done some video work in the past for a large media company using a teleprompter. I know it would work well for me.
I produced the video with Apple’s Final Cut Pro X running on a loaded iMac Retina 5K with 32GB of RAM and an Intel Core i7 running at 4GHz. Sounds like a beast of a machine, eh? Sigh. Not so much. I wish I had more cores, or maybe a Mac Pro. Video rendering (the part you do when you’re done editing) takes a long time.
I won’t go into the specifics of FCPX here. If you care about that, go to YouTube and search. The sheer volume of FCPX instructional videos borders on profligate. I will summarize the tools I used, however.
- Titles for lower thirds, plus a date in the upper left-hand corner.
- Several transforms to move my headshot off-center, to size and place graphics, etc.
- Video animation with compositing opacity so that graphics would fade in and out instead of suddenly appearing and disappearing.
- Chroma keying to make the green screen disappear.
- Secondary audio track inserted, with primary audio track muted. I used the audio from the lapel mic you see in the shot instead of the audio captured by the iPhone.
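FCPX handles the chroma keying in the GUI, but the underlying idea is simple: find pixels where green dominates and swap in the replacement background. A minimal sketch with NumPy (the 0.15 dominance threshold is an arbitrary illustration; real keyers like FCPX’s add edge softening, spill suppression, and much more):

```python
import numpy as np

def chroma_key(frame, background, threshold=0.15):
    """Replace green-dominant pixels in frame with pixels from background.

    frame, background: float arrays of shape (H, W, 3), RGB values in [0, 1].
    threshold: how much greener than red/blue a pixel must be to be keyed out.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    mask = (g - np.maximum(r, b)) > threshold  # True where green dominates
    out = frame.copy()
    out[mask] = background[mask]
    return out

# Tiny 1x2 "frame": one green-screen pixel, one skin-tone pixel.
frame = np.array([[[0.1, 0.9, 0.1], [0.8, 0.6, 0.5]]])
background = np.zeros_like(frame)  # key the green out to black
keyed = chroma_key(frame, background)
print(keyed[0, 0])  # green pixel keyed out → [0. 0. 0.]
print(keyed[0, 1])  # skin-tone pixel left untouched
```

This is also why an evenly lit, wrinkle-free screen matters: shadows push green pixels below any dominance threshold, and the key breaks down there.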
Another thing I wish I had done was use a visual flag to signal each segment. Without one, I had to go through the entire video carefully to insert the graphics and lower thirds in the right spots.
This was my first project using a Contour ShuttleXpress, a USB rotary dial that makes getting to just the right spot in the video much easier. I use it with my left hand and a trackpad with my right.
Much of my time spent editing the video was simply figuring out how to get around in FCPX. For example, if you’ve never done chroma keying, you have to watch a video that explains it to you. It’s not hard, but you won’t figure it out just by clicking around if you’re a video editing n00b.
I found this to be a pattern with every FCPX tool — the first time out will take a while. For instance, using transforms drove me a little nuts, because I couldn’t grok how to get the handles to appear consistently on the object I was manipulating. Then I figured out to click on the Transform tool itself when the handles weren’t showing up, and I stopped losing minutes fumbling around in confusion.
The last thing I did when done stumbling and fumbling with FCPX was to add a brief top and tail. Both were the same video clip — a pre-rendered video my business partner made with Apple Motion.
Final rendering takes an enormous amount of time. Every added effect, title, and graphic has to be turned into video frames. FCPX renders in the background constantly with spare CPU cycles, but even so, the final render took dozens of minutes with my iMac’s cooling fans whirring away.
First time out, I rendered from FCPX directly to YouTube. Once FCPX is authorized to use your account, you can set YouTube as a sharing target.
I learned a couple of important things about YouTube.
- YouTube re-encodes whatever you upload in its own way, and that takes a while. You aren’t simply “uploading a video to YouTube.” The process is more involved.
- While YouTube is working on your video, the video will only be available at 360p. This is a brief, temporary situation.
The 360p issue was a surprise. Assuming I’d done something wrong that resulted in 360p instead of 1080p, I deleted the upload, re-rendered the video locally at 1080p, watched it to be sure it was what I expected, and uploaded that to YouTube, only to get the same 360p result. I executed some google-fu, discovered my blunder, and waited. After just a few minutes, the video was available in a variety of resolutions up to 1080p, and the glory of 1080p washed over me.
The next time…
- I need to sort out a teleprompter. I have a plan.
- I need to improve lighting. I have a plan here as well.
- I will flag the end of segments with a piece of colored construction paper, then edit those bits out.
- Video editing & publication will go much faster. I learned a lot during the initial round of n00bery.
Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks