Our man in Wisconsin, Dustin Hoye (FCPX Editor of The Next Bite), is back again making our lives a little easier with this video about the little understood, hidden away in a submenu, but really useful “reimport from camera archive” option:
One small thing to add to this tip: if you ever find media not linking back up properly from clips imported from cameras that require a plugin to import (usually Canon or Sony), hitting “reimport from camera archive,” pointing FCPX at the original media (make sure it’s still in the original card structure), and letting the import process finish completely will usually solve the problem.
In general, best practice while things are importing (especially if FCPX is making wrappers of clips in the background) is to not do anything that might make your system angry while this process is happening. I realize this is vague… but if for even one second you question whether something you’re about to do while things are importing or waveforms are generating might make FCPX angry… don’t do it. Wait for your import to finish and your progress bar to get back to 100%, and then do the thing you were thinking about.
And if you do make FCPX angry and you have to force quit for some reason and then find your footage now needs to be relinked… you now know what the “reimport from camera archive” button does and your blood pressure can go back to its normal level.
Was fortunate enough to spend some time with Phillip Hodgetts from Intelligent Assistance last week and we had a longer discussion about Lumberjack, his amazing tool for real time logging and pre editing in FCPX.
I had watched him demo it, and had even taken it out on a test drive with our How We Make Movies Podcast a couple weeks ago… and I had some misconceptions/issues on the first go-round with it.
Anyway, there were a few things I noticed as I talked with him…
First, for some reason it had taken me forever to finally work up the nerve to try it out on a real project… it was like I was afraid of it somehow. The concept had seemed easy enough to me… but there was something about it I couldn’t wrap my head around… and I realized that while talking with him I was actually just over complicating everything… I was making the app and the process harder than it needed to be.
Second, things that are obvious to Phillip Hodgetts are not obvious to the rest of us. After he answered two of my questions, literally everything else about the app made sense.
Third, I had some misconceptions about the app that simply aren’t true (for instance, I thought that wireless was required to do logging).
The Content Creation Date Sync thing is ridiculously easy – Forget timecode, slates, etc. if you’re using Lumberjack. Just tell your cam ops to set the clock on their camera to the same time as the Lumberjack app and you’re good to go. If you want a really good insurance policy, have all your camera ops make their first shot of the day an image of the Lumberjack logger screen (where the exact time is listed). You should have nothing else to worry about after that.
Sync your multicam clips in FCPX first and then send to Lumberjack – I asked him whether you could have the metadata applied to multicam clips… the answer is yes. What you should do is bring your footage into FCPX first, make the multicam clip, and send an XML containing the multicam clip to the Lumberyard app, which will apply your logging info to the multicam clip.
The Lumberjack Logger (web) is different from Lumberyard (OS X app) – While you’ll be doing all of your logging through the web/iOS app, you’re still going to need the OS X app to do the XML interchange.
You can still log if something happens to your wireless – Basically, if you’re experiencing problems with slow wireless (like I did on my first shoot), and you created your event ahead of time, you can still log through the iOS app, get done what you need to get done, and hook up to the Lumberjack server when your wireless situation improves. You are not a prisoner of your wireless network while using Lumberjack. Just use the iOS app. This was a big misconception for me.
You can use Lumberjack after you’re done shooting or with old footage – The IA guys just put into beta their new Backlogger app, where you can log footage you’ve already shot (or catalog video masters you’ve finished for things like promo departments).
Anyway, for all the info you need about getting going with Lumberjack, check out the newly published Quick Start Guide… and if you have any problems, Phillip and Greg are awesome with support.
If you haven’t checked out Lumberjack yet, and you’re doing a lot of Doc/reality/non-scripted work… it’s probably going to become your new best friend.
There’s no reason to be afraid of it. It’s easier to learn than you think.
Imagine you’re a company that has two products in the same product category. One is the beginner product designed as an introductory tool that is also powerful enough to be used by most hobbyists and is perfectly fine for the average person. It’s easy to learn, simple to use, free, and widely used by millions.
Your other product is designed for professionals. It has a completely different methodology from your beginner tool and requires people graduating from your introductory tool to completely relearn everything they had learned from that tool in order to use this new, “professional” tool.
Most people would read the above and conclude this was a stupid business strategy. There’s no good reason that your two products in the same product category should be so different. Not only that, but in terms of the bigger picture, a smart company would design their long term plans around the product that has the most users, and build from the product that is accessible to the largest number of people.
In case you haven’t figured it out… I’m talking about what Apple did with iMovie and Final Cut Pro. iMovie has WAY more users than Final Cut Pro 7 ever did, and the potential for far more long term growth. It was the obvious platform to build on top of.
Essentially, the “pro” editors wanted Apple to have their students spend grades K-12 learning everything in English, and then when those students went to “college”, all of their courses would be taught in French.
Most outsiders would think that was a really dumb idea. However, the professional editing community loves to throw around the iMovie Pro insult like it’s actually, you know, insulting. If you want to see how contentious it can get, check out some of the comments here and here.
Your introductory tool should be something that paves the way for users to graduate to the more advanced tool. They should not need to relearn everything they had already learned in order to become “professional”.
The truth is that some of these “Pro” editors simply do not see the bigger picture. When they call FCPX iMovie Pro, what they don’t realize is that they’re actually complimenting Apple for having competent business strategy and common sense.
My guess is that most of these “Pro” editors will understand what Apple had in mind when all of the iMovie kids start showing up at post houses and start asking the “professionals” why they can’t do all the things on their “professional” software that they’re doing on their home computers.
For many people who have been around for a while, it will be eerily similar to what happened when the original Final Cut Pro became popular and an entire industry was caught off guard.
What’s really ironic is that the people who are complaining the loudest about FCPX are the Final Cut Pro 7 editors. I kind of feel like they should know better. They seem to not like the taste of their own medicine.
I find it all a bit hypocritical. Things change, and tools change… but in terms of the changing of the guard… the more things change, the more they stay the same.
It’s becoming increasingly obvious to me that we’re entering a “do-it-all-in-one-app” world for most things.
For many clients, bouncing out to After Effects or Motion, or having a dedicated GFX person who “handles all that stuff,” is becoming far less common in how Producers budget for jobs.
Turnaround times for videos are faster, and editors with “Jack of All Trades” skill sets are becoming almost mandatory (please don’t get upset with me about that… I’m just the messenger).
Clients asking “Can’t you just do it yourself?” is becoming the norm.
And even if they’re not asking that, being able to turn around GFX quickly and having them still look professional is a HUGE value add you can provide for your clients as an editor.
The main problem is that most people don’t have time to learn everything. When it comes to motion GFX, especially for corporate/branded work where you’re expected to do everything yourself, having some great looking templates/design elements in your arsenal becomes the difference between a profitable job and a too-time-consuming-to-be-worth-it one.
I’ve actually done a video about Ripple Callouts in its previous incarnation, but it just got a great new update (free), and the video on the link above will tell you more about what can be done with it than I can. In terms of creating quick, professional looking “callouts” for things going on within a frame, there simply isn’t a better plugin package on the market.
If you need something that’s going to add a bunch of style to your typical freeze frame, you’re going to want to look at Snapshots… which is a series of Freeze frame templates (transitions for these are included as well). These plugins are a great way to impress a client or create a package around their branding with minimal effort… or at least far less effort than everyone else is putting in.
And when you use these in conjunction with something like Cineflare’s Kinetic Badges, which is a series of animated and highly customizable vector graphics, chart-type things, and textures, corporate and branded work becomes far simpler, less tedious, and considerably better looking.
Quick note about all of these packages, and really any package you work with in FX Factory… don’t watch the “demo videos” to learn how to use the plugins. Watch the “(plugin package) in Final Cut Pro X” video that is next to the “demo video” on the product’s info page to get a more in depth understanding of what to do with the plugins.
All of these packages are available through FxFactory and are only available for use in FCPX.
Review disclaimer – Yes, we do sometimes get free products and licenses. No, this does not affect our reviews. We only advocate and sell the products that we use in our own workflows. If we bother to review something, it’s because we use it in our day to day and like it. We also very much admit that we haven’t seen everything… if you think there’s a product out there that we should be talking about, please let us know at workflow@fcpworks.com.
To be perfectly honest, I was a bit scared of this one when I first got it. It took me a while to open it up and really dig into it… and even longer before I figured out what I’d actually want to do with it.
Honestly, I’m literally not capable of building one of the standard templates in Nodes on my own… so it’s nice to be able to start from something you could never even begin to understand how to create and then quickly start building on top of it.
Like most things… once you make the decision you’re going to dive in with it no matter what, you start figuring things out, and once I decided I was going to take the time to figure out how Nodes worked, I actually picked it up pretty easily.
Also, the templates were surprisingly responsive on both my Macbook and Mac Pro. I was able to move sliders around and see things update extremely quickly, which I have to admit was a surprise. With the level of complexity in these plugins, I was expecting to see lots of beach balls when I messed with stuff. This has not been the case with Nodes, for the most part.
For me, Nodes is most useful as a place to begin your larger template or as a quick way to throw on a complicated design element on an existing comp you’re doing. Basically, you get a sense of something you want to do, and instead of reaching into the Motion Particle Emitters tab (another underutilized resource that is a bit hard to wrap your head around), grab a Nodes template and get going. You’ll likely end up with something cool a whole lot faster than you would building something from scratch.
Yanobox Nodes
They’re also great as a ready-made, cool looking particle element you can easily throw a blend mode on (I typically use Soft Light or Overlay for this) to give a texture some life. You can use them in a very similar way to how you’d use a light leak… and the two complement each other nicely if you need something like that.
So often, you get plugins that just replicate stuff you could probably do yourself… or feel very “stock” or hard to customize. Not Yanobox Nodes.
Things are so ridiculously customizable, it’s hard to go into too much detail. I think plugins like this are the future, because they allow you to start from a place you’d have literally no idea how to get to on your own. Like CoreMelt’s SliceX, it allows advanced effects techniques to become a lot more approachable for the average editor… and I think that’s something all plugin makers should aspire to.
I’ve never seen another plugin package like Nodes 2. It’s definitely not cheap ($299), but you get what you pay for… and then some.
For more information on how to get started with it, go here:
If you have a need to quickly step up your Motion Graphics game, this plugin should probably be at the top of your list.
Sam here… nothing I write here is going to compare to what Peter Wiggins did over on fcp.co. Go grab a cup of coffee and read this for the next 25 minutes:
What Peter had going over there is exactly the type of setup we’ve been advocating at FCPWORKS, and it’s not radically different from what we had going at our FCPWORKS launch event.
If his case study doesn’t blow the doors off the myth of FCPX being an unprofessional product, I really don’t know what it’s going to take.
I have a feeling we’re going to see a ton more stories like this coming. I’m looking forward to the day where this sort of thing isn’t raising eyebrows anymore. I feel like it shouldn’t be.
Sam here… finishing up 4K week on FCPWORKS Workflow Central with one more post.
So… BMD or AJA… the eternal debate. Right now, we’re centering this around the 4K monitoring & I/O products: AJA’s Io 4K or the BMD UltraStudio 4K. Basically, it comes down to this… Do you use DaVinci Resolve for color correction? If the answer is yes, you’re going to need to go with Blackmagic. Case closed. Blackmagic devices are the only ones that will work with Resolve.
However, if the answer to that question is no, and you’re doing your color work primarily in FCPX or another program that isn’t Resolve (Scratch, Baselight, Smoke, to name a few), the discussion becomes a lot more complicated. Additionally, in the case of the BMD UltraStudio 4K… it can be loud. The AJA Io 4K is quiet and considerably smaller. If you’re keeping the product in a room with a new Mac Pro as your primary computer, you really start to hear the UltraStudio 4K when it’s on… and if you’re doing serious sound mixing, the noise makes a big difference.
Additionally, and this is a little-known fact, the AJA devices support more monitoring formats for FCPX as well. For whatever reason, the monitoring formats in the Blackmagic preferences (System Preferences on the Mac) are considerably more limited than what the same UltraStudio 4K offers in Resolve.
HOWEVER, at the end of the day, price is also a factor and the AJA products are almost universally more expensive than their BMD counterparts. So price vs. performance is definitely a consideration. In my opinion, if you’re more interested in specific features, go AJA. If you’re budget conscious or a heavy Resolve user, go BMD.
P.S. Little-known fact, but the HDMI out on the back of the new Mac Pro can be used as an 8-bit A/V out in FCPX, and it is FAR more configurable for video I/O than your BMD or AJA device if you’re using weird sequence settings and just need to send out a 1:1 output over HDMI.
Basically, the long and short is that the DP is worried that higher resolutions and the ability to make alterations to images further down the line in post are going to take control away from DPs over their images.
On the one hand, I can totally understand where he’s coming from, and he’s totally right. I’ve seen quite a few projects butchered in color correction, and I imagine it must be very difficult to go out and put your heart and soul into shooting/lighting something only to have it completely reworked in a way that’s entirely not what was imagined… and then be credited as if that was how you wanted it. That sucks.
However, this is not the fault of the resolution, RAW, or improvements in technology. The fault lies with the way that departments work together, and it’s my biggest pet peeve in the entire industry.
No one talks to each other.
Departments don’t talk about workflow before the shoot starts. Production rarely asks what post wants. Post rarely checks in with the DP or sound department after the shoot is over. VFX lives on its own island and is expected to push the “make it better” button on whatever production hands them. Everyone is just trying to get through the day, and get through the gig.
There’s no process and no blueprint. There’s no workflow.
Actually… that’s not even really true. There are too many workflows, and every department/individual has their own specific way they think things should be done/delivered to them. Rarely do these different workflows sync up across departments. Even more rarely does one department ask another how they want to do things before the production starts. Usually there’s a list of delivery requirements on how a vendor wants things that is discovered after the critical production decisions have been made.
A few examples that illustrate this:
– An anamorphic lens is chosen because Production likes the widescreen look. Post is never consulted, and no one in post knows how to transcode/desqueeze the anamorphic footage correctly. Footage is processed slightly warped and then edited that way. Conform becomes a nightmare. Also, it turns out the distributor needs a full-frame 1080 master (no bars on it), but in many cases the movie wasn’t framed to live in a 16:9 master. Massive pan-and-scan work needs to be done. Post budgets go up.
– LUTs are created for each shot, but no discussion has been had over how these will be applied to the RAW footage when it’s time to do the conform. No one bothered to run this workflow past the editor or colorist, who have no idea how any of this was handled, and the production had a falling out with the DIT who made all of the looks. In the end, LUTs are applied incorrectly or not at all, and no one has any idea what LUT goes with what shot or how to sync all of these LUTs up in a way that isn’t ridiculously time consuming. Post budgets go up.
– RED footage is transcoded down to ProRes at a random resolution with letterboxing baked into the ProRes. No one in post has any idea how to correctly get back to the original R3Ds with the proper transforms from the edit applied to the RAW footage. Post budgets go up.
– No one asks the VFX department how they want their greenscreen shots done. Tracking markers are not used, and yet the camera is moving during the shot. Post budgets go up.
– VFX works in REDcolor3 and delivers DPX plates. The colorist is using REDlogfilm and grading everything from the RAW. Things don’t match shot to shot. Post budgets go up.
– Editors need to deliver their picture to a Sound house. They’ve never delivered to this sound house before, and the Producer picked them because they had the cheapest bid. Lots of ADR work is expected. Post budgets are about to go up.
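To put some numbers on that first anamorphic example: the math is simple, which is exactly why it hurts when nobody runs it before the shoot. Here's a quick sketch (the 1.33x squeeze factor and recording resolution are illustrative assumptions, not from any particular production):

```python
# Illustrative anamorphic desqueeze math. An anamorphic lens squeezes a
# wider field of view onto the sensor; post has to stretch it back out.
recorded_w, recorded_h = 1920, 1080   # recorded frame (assumed)
squeeze = 1.33                        # anamorphic squeeze factor (assumed)

desqueezed_w = round(recorded_w * squeeze)
native_aspect = desqueezed_w / recorded_h
print(f"Desqueezed frame: {desqueezed_w}x{recorded_h} ({native_aspect:.2f}:1)")

# A full-frame 16:9 (1.78:1) master can only show a crop of that width,
# which is where the pan-and-scan work (and budget) comes from.
crop_w = round(recorded_h * 16 / 9)
print(f"A 16:9 crop keeps {crop_w} of {desqueezed_w} desqueezed pixels "
      f"({crop_w / desqueezed_w:.0%} of the frame width)")
```

Five minutes of arithmetic like this, done before the camera package is locked, tells everyone what the delivery master can and can't hold.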
Anyway, you take things like the above and then throw in the fact that, in most cases, especially on smaller commercial jobs, most of the people involved are working with each other for the first time. Chemistry and trust are non-existent. Bids have gone out to the lowest bidder and not to the most qualified teams. CYA (Cover-Your-A$$) attitude becomes prevalent. Fingers become ready to be pointed. Accountability becomes nonexistent. People get angry. People get fired.
Someday, I want to live in a world where the DP knows the editor and both of them know the colorist. They’ve all worked with the director before. Also, before they’ve shot, each of these people sat down in a meeting with the VFX and sound departments and talked through how the imaging pipeline was going to go from set to edit to VFX to sound to color to mastering. Then, someone would come up with a diagram based on what cameras were being used, how sound was being recorded, what resolution needed to be delivered, and in what color space(s). Then, they’d also write down how metadata would be managed, VFX would be roundtripped, sound would be turned over for the mix, video would be conformed for color, and how, in general, the project would be set up and delivered to the distributor based on pre-agreed upon sound, color, and mastering specs. The departments would then take this diagram home, decide what needed to be changed based on their needs, and then come back and finalize their process, compromising where necessary for the greater good of the project.
This would all be done before a single frame of footage was shot.
A man can dream.
Anyway, until people start working this way and figuring out their process ahead of time, people will continue to write blogs like the one I linked to above and blame things like resolution and RAW for why their footage doesn’t look right in the end.
Departments need to communicate about workflow more. That’s not technology’s fault.
There’s a lot of talk and a whole lot of hype when it comes to 4K. I’m certainly guilty of a lot of that hype. However, most people know very little about 4K and are pretty intimidated by the subject. Here’s some quick hits when it comes to working with it.
First thing you need to know is that there are two flavors of 4K delivery resolutions:
4K UHD – This is the spec for 4K for broadcast and in the home. Resolution is 3840×2160. It’s a 16:9 aspect ratio (1.78:1), and is really just double the resolution of standard HD (1920×1080). Most 4K displays and televisions will be 4K UHD.
4K DCI – This is the cinema spec, and the resolution is 4096×2160. The aspect ratio is roughly 1.9:1. Like 4K UHD, this is just double the resolution of the standard 2K spec (2048×1080). You’ll only really see the 4K DCI spec in play if you’re watching a movie in a theater.
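The “double the resolution” relationships above are easy to sanity-check with a little arithmetic (a quick sketch using the published resolutions):

```python
# Sanity-check the two 4K delivery specs against their HD/2K counterparts.
specs = {
    "HD":     (1920, 1080),
    "4K UHD": (3840, 2160),
    "2K DCI": (2048, 1080),
    "4K DCI": (4096, 2160),
}

for name, (w, h) in specs.items():
    print(f"{name}: {w}x{h}, aspect {w / h:.2f}:1, {w * h / 1e6:.1f} megapixels")

# "Double the resolution" means double in each dimension, i.e. 4x the pixels.
assert specs["4K UHD"] == tuple(2 * d for d in specs["HD"])
assert specs["4K DCI"] == tuple(2 * d for d in specs["2K DCI"])
```

Worth remembering: doubling both dimensions quadruples the pixel count, which is why 4K files, render times, and storage needs balloon the way they do.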
From a traditional viewing distance, 4K only really becomes noticeable once the screen hits 84 inches. However, once you hit that size, and if shot and projected properly, the results are pretty stunning.
As of this writing, most of the 4K TVs being sold are not worth buying. Either the panels are really cheap and the image quality is not great, or the price is just not worth it. If you need something that can monitor at 4K, and you’re working on a budget, get one of the cheaper panels, and pay very little attention to the color and contrast of the monitor. Just watch for sharpness and resolution factors. In many ways, what’s happening now is like what happened when HD first appeared. Sets were really expensive and only for high end pros or people with money to blow. Wait a while and you’ll start to see more affordable options appear.
Lastly, you don’t need to monitor 4K while you’re doing color correction. I’d recommend using an HD Broadcast monitor while doing color (with your video I/O set to 1080). Buying an affordable 4K grading monitor is pretty much impossible and won’t make any difference to your color decisions. Color correcting your scaled down 4K images at 1080 is still the way to go. Right now, I think the only useful thing a 4K monitor is really capable of is to check the overall sharpness of your image at a 4K resolution when you’re mastering. Everything else is not ready for prime time yet… at least in my opinion.
Sam here… So… believe it or not, it’s actually easier for the average person, given access to the right projector, to put on a higher resolution screening than what they typically see when they go out to the theater.
When you go watch a movie at the typical multiplex, you’re almost universally watching a movie that was made from a 2K master… even if the projector is 4K, the movie itself was up-rezzed from a 2K file to fill the screen.
The main reason for this is that Hollywood hasn’t really figured out the whole 4K pipeline thing… especially on the VFX side. It’s far simpler and more practical for them to finish in 2K.
What this means is that if you have a Dragon, Epic, GH4, or 4K BMCC, it’s a pretty straightforward process for you to shoot, finish, and screen at a much higher level than the big guys do… especially if your VFX pipeline is simple.
In fact, if you somehow managed to have access to a nice 4K Projector with an HDMI port on it, you can put on a higher quality screening in your living room than you’ll currently see in the multiplex.
Why? Well, both the Mac Pro and MacBook Pro will output a 4K signal through their HDMI ports.
Those HDMI ports will also send a 5.1 audio signal.
A 4K 5.1 screening is now a pretty straightforward process if you’ve got the right home theater and you know how to plug in an HDMI cable and export a 4K ProRes.
It’s now easy to shoot 4K, post 4K, and then screen it right from your laptop.
I have no idea why film festivals make things so hard for Filmmakers with their DCP, Bluray, or Tape requirements.
Filmmakers should be able to just hand over/dropbox a QuickTime movie and get on with their lives. For some reason, everyone loves to make things complicated.
With my film collective, We Make Movies, we do our annual WMM Fest of our community’s work in LA, and we run all of the screenings (there were 5 this year) right from my laptop. In fact, every screening we’ve ever done has been done in 1080 ProRes, using our filmmakers’ QuickTime master files, and playing from a laptop through QuickTime or Final Cut. It’s just easier.
The only reason we’re not doing 4K screenings is because most filmmakers are still mastering at 1080, and 4K projectors are still way too expensive. Both of these things will be changing in the not too distant future.
If we had the right files and the right gear, though, our process would still not change at all. ProRes is still ProRes, and we’re still just playing it out of an HDMI port to a projector.
Our screenings look better, sound better, and we have almost no room for technical issues because we do things this way. We work from the masters, and leave as few things to chance as humanly possible. As long as the projector is calibrated, we’re good to go.
And while I explained in our blog here that it’s a lot easier for filmmakers to make DCPs these days… it’s still a very difficult format for the average person to implement on their own, and screening one for an audience is far from a user friendly experience.
Both the DCP and Blu-ray formats were designed from day 1 to be difficult to create and hard to pirate. Essentially, as most high end technologies typically are, they were designed to keep people from understanding them, to keep them proprietary, and to maintain established business models… in this case preserving the studio multiplex and home digital distribution businesses.
Fortunately, there’s a pretty easy way around all of this nonsense… which is good news for the independent filmmaker who isn’t tethered to this process and can figure out how to make and distribute their own content.
Right now, I look at DCPs as a necessary evil, but the truth of the matter is that the safest and easiest way to screen a movie for an audience is to just run it through the HDMI out of your Mac from your QuickTime master.
Why do people feel the need to make things so hard?