Hello! I’m wrapping up this (very) brief LEMONADE series by talking about home movies! This is a little bit about aspect ratios but maybe a little bit more about preservation issues with home movies. If you haven’t already, it’s worth catching up with Pt. 1 right here, because this picks up where that aspect ratio context left off.

The first time we see the 4:3 ratio appear, it’s a cropped image during “Don’t Hurt Yourself,” sized down from a wider shot. But shortly after, we also get these close shots of women’s faces during the Malcolm X quote. Bonus facts: this footage was also filmed in 4:3, but on film, and this clip was converted to a magnetic tape format (see the head clog squiggles at the bottom?) and then digitized — an example of three formats, each with its own errors, all in one!

So all three of these are the same aspect ratio, but why do we feel differently about them? There’s some added “film grain” in these shots to intentionally give them a home-movies look, unlike the high-contrast black-and-white images of women dancing in a parking garage, which look clean, and read as constrained only because they are interspersed with similar imagery at a wider ratio. The grain and warmed tones give it an intentional “old artifact” look (think “Instagram filter”). This filtered look comes up again during “All Night.”

But some of the clips of couples during “All Night” have an aspect ratio of 1.77:1, which fills the entire screen.

It’s likely these were shot with the same camera using the same aspect ratio, but with effects and filters added later, as well as the cropping down to 4:3. (Cropping issues aside, I’m not entirely convinced that this isn’t originally filmed on film. The soft darkened edges and occasional errors look pretty legit to me.)

“Daddy Lessons” includes footage of horse-riding and New Orleans family life, given a similar “home movie feel” treatment. Pretty much all of the footage during this song, excluding footage of Beyoncé, is set in this 4:3 format.

But as a bonus within this song, there’s a real home movie of lil Bey, probably recorded on good old-fashioned VHS! Some signs of aging include image ghosting, head clog, overly high contrast, and dropout — many of which were outlined in a previous blog post about Formation specifically. But this is a good example of the real concerns about the fragility of magnetic media.

Following Beyoncé’s home movie (Beyoncé and dad) is a clip from Blue Ivy’s home movies (Beyoncé and grandpa), which moves from mid-1980s Standard Definition (and 4:3) to mid-2010s High Definition (and 16:9).

We also get to see some pre-Blue home movies from Beyoncé’s extensive archive during “All Night.”

This is a clip from a video of Beyoncé and Jay-Z celebrating wedding vows with IV tattoos. Blurry, low resolution, standard 4:3. No shade, this was probably either filmed with a MiniDV camera or using a cameraphone (my bet is on cameraphone, Peter thinks MiniDV). Just like many of the following home movies, which are fragile in their own way. Let’s talk about how!

The above clip has overly blown-out white levels. Similar to the problems with VHS (and other forms of magnetic media), it’s hard to get the contrast right. What do you expect, though? It’s not professional-grade — these are images from a consumer-grade camera. That doesn’t matter when you are looking back through your files and trying to find some lemonade-making videos to play at Grandma Hattie’s 90th birthday.

Shout out to Grandma Hattie, though, who seems to be pretty chill with having a cameo on an album that strongly features her grandson Jay-Z being called out as a cheating garbage monster. This was probably filmed in an HD aspect ratio (based on other clips from the same event in LEMONADE and the lack of errors more likely to be found in other formats) but cut to 4:3 to, again, have a “home movies feel.”

While talking home movies, what is with this error? No, seriously, what is this?

This is footage of Tina Knowles and her husband Richard Lawson, taken on their wedding day. Very sweet. But what is this white flash across the screen trying to ruin a happy moment? I have no idea.

Between Beyoncé’s pregnancy and Blue Ivy’s birth, the aspect ratio (at least in what they’ve included in LEMONADE) changes from the boxier standard-definition size to something wider and larger: a 1.77:1 aspect ratio in HD.

Phone-based cameras comply with both commonly used video aspect ratios and common still-photography aspect ratios, because the camera exists as a multi-tasker, with the ratio changing depending on the chosen camera setting.

The above home-movie ratio is the same as the professional-grade video taken below.

This is a high-quality, HD camera shooting at a 1.77:1 aspect ratio. Your home movies probably aren’t gonna look as good as this.

But home movies aren’t always looking good, even if you are a celebrity. What’s going on with Blue Ivy here? Everything is so blocky, the lines aren’t smooth, and the color is off. Blue Ivy was born into the world of lossy compression in digital video.

So baby Beyoncé videos worry about magnetic-media-problems and baby Blue Ivy videos have to deal with digital-compression-consumer-camera-problems. Blue Ivy videos probably go straight into the archive for safe-keeping, which saves them from the turmoil of having to be recovered from outdated devices using outdated software, or other obstacles dealing with obsoleted-but-still-proprietary software, like ancient versions of iTunes. If you created a tiny version of yourself and you are capturing all those tiny-you moments with your phone, this should scare you!

It’s important to think about these problems in relation to our own home movies and our personal digital archives. Beyoncé has more than one full-time archivist and supplemental archival help to do this work, so her home movies are the highest quality and are given the best care imaginable, right? But they still suffer from errors due to the fragility of all audiovisual media. From the very beginning, at the point of capture, the images are imperfect, landing on fallible magnetic tape or in proprietary, binary black boxes.

As always, direct corrections to the Issues page or to me!

Last Thursday, I attended {Let’s Get Digital}, a symposium hosted by the National Digital Stewardship Residency program (New York) for Preservation Week. (And thanks, also, to Brooklyn Historical Society for hosting and the Archives Round Table of Metropolitan New York (A.R.T.) for sponsoring!)

The symposium opened with an overview of the NDSR program by 2014-15 resident Vicky Steeves (resident at the American Museum of Natural History, now working at NYU Libraries). It was great to hear not only how the residents are getting an incredible amount of experience and opportunities to grow within their organizations and within the field (with support to attend and speak at conferences, integral to the success of an emerging professional), but ALSO how much the digital preservation field is benefiting from the work being done by the residents, within their organizations but also reaching far beyond them, an overall positive ripple effect throughout the whole field. Thanks, Vicky!

Carmel Curtis, resident at Brooklyn Academy of Music, discussed what she learned during her residency. Her primary project involved setting up a digital preservation policy for record retention. How long do you keep things? What do you keep? When can you throw something out? This is easier to comprehend when dealing with physical materials, but digital materials need a structure in place, too.

She had ten tips (and ten Clueless gifs):

  1. Work with IT
  2. Talk to as many staff as possible
  3. Don’t shame people while investigating how they work
  4. Plan how you will record/transcribe (or if you will)
  5. Pick a format
  6. Base policies and standards on the language of the staff
  7. Determine time
  8. Get legal advice (if you can)
  9. Limit who can transfer data to the archive per department (or however)
  10. Make note of the information stored in databases or other systems

Genevieve Havemeyer-King also spoke on her experience as the NDSR at the Wildlife Conservation Society, where she set up a digital preservation policy for the institution. She mapped the NDSA Levels of Preservation to their functional requirements and developed a checklist. Matching all of the requirements together helped them come up with an ideal system for managing their assets now and into the future. Different organizations will have different needs, which is why doing this work (and taking the care to do it right) is so important.

Rachel Mattson, Manager of Special Projects at La MaMa Archives, and Poorna Swami, Development Associate at La MaMa, held a workshop-discussion on diverse ways of getting funding for your organization. Especially helpful was revealing what parts of their grant proposals received criticism from the grant reviewers — being vulnerable and open about successes and failures can help the field overall. I think a lot of times it depends on the batch of grant reviewers you receive. Rachel said they were rejected for not paying a high enough wage, but I’ve also seen grants get rejected for having too-high salaries (when they were not very high at all). This is me speaking, not a recap of the talk, but more transparency would go a long way toward helping grant authors know their audience and know what kind of projects to pitch. Something frustrating about grants is that they all have different rules, policies, and reviewers, and it’s hard, even if you know what has been funded in the past, to know if your project is a good match.

One of Rachel’s points was to take a holistic approach to funding — there are grants, but there are also individual donors (small and large) that can help fund projects. Find people who care about the work of the organization (and have money to donate) and grow a donor base through them. Having also worked (for a little while) in development (the money kind), I thought about the importance of “friend raising” — start the conversation early on with philanthropists, ask to be introduced to their friends, hold events that really show off the importance of the work being done by your organization so they will want to support it.

A major resource pointed out by Poorna was Foundation Center, which lists people who have funded specific organizations. It is a paid platform, but access is free at their office or any of NYPL’s research or branch libraries.

There was some good discussion, and a question about dealing with accounting, which I think brings up another important part of writing to win grants: what happens if you do win? When writing, and when thinking ahead to your organization’s annual budget, it’s important to remember that a lot of work goes into administering a grant that may not be written into the grant itself, even if your grant is a cost-share with your organization. In my experience, there are a lot of reports. There are a lot of weird things that happen that were not planned. And sometimes people leave or other issues come up that complicate the original plan, so it’s important to know how to be agile and work around issues that will come up during the funding period.

Mary Kidd, NDSR at New York Public Radio, gave a talk on working with one of my most nostalgia-inducing formats, the MiniDisc. The talk was about how MiniDiscs are a preservation nightmare, but also a warning about what’s coming with regard to the kind of proprietary software we deal with now (like smartphones), and how access to data can be made difficult through “software firewalls” in addition to the technical, hardware problems that already come up. Mary was only able to access data on NYPR’s MiniDiscs because open source software built to solve the problem had been developed in the early 2000s.

This software was made because Linux users wanted to be able to access their MiniDiscs, which were only compatible with Windows machines. I think it’s important to think about how this open source software was made not with preservation in mind, but because of the problem with access when the software was still actively being developed and purposefully being restricted to one operating system instead of using open protocols.

After this talk, Carnegie Hall’s Kathryn Gronsbell (with support from Genevieve Havemeyer-King) held a workshop on using BagIt. First she talked about what this tool did and then explained why it’s useful to use this tool even though the tool’s actions are overall very simple (creates a folder, creates checksums). One major point was “Why would you do it yourself when you can automate it?” BagIt can fit into automated workflows so institutions don’t have to spend people-hours manually creating these organized structures before putting them into long-term preservation storage. I didn’t personally make a bag during the workshop (been there, done that) but it was still fun watching Kathryn make bags and sprinkle in some command-line tips on the fly when files weren’t opening.
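For anyone curious what “creates a folder, creates checksums” looks like in practice, here’s a minimal, hypothetical Python sketch of the core BagIt idea. This is not the real bagit-python library, and it skips most of the actual spec (RFC 8493); it just shows the payload-folder-plus-manifest move:

```python
import hashlib
from pathlib import Path

def make_bag_sketch(src: Path, bag: Path) -> None:
    """Toy illustration of the core BagIt idea: copy the payload into
    a data/ directory, then write a checksum manifest and a minimal
    bag declaration. (The real spec, RFC 8493, requires more.)"""
    data_dir = bag / "data"
    data_dir.mkdir(parents=True, exist_ok=True)
    manifest_lines = []
    for f in sorted(src.rglob("*")):
        if f.is_file():
            rel = f.relative_to(src)
            dest = data_dir / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_bytes(f.read_bytes())
            digest = hashlib.sha256(dest.read_bytes()).hexdigest()
            manifest_lines.append(f"{digest}  data/{rel.as_posix()}")
    (bag / "manifest-sha256.txt").write_text("\n".join(manifest_lines) + "\n")
    (bag / "bagit.txt").write_text(
        "BagIt-Version: 1.0\nTag-File-Character-Encoding: UTF-8\n"
    )
```

The real tooling also handles tag manifests, validation, and edge cases, which is exactly why automating with the maintained tool beats hand-rolling this.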

So many great talks! But onward…

Next was a panel all about web archiving. I was getting a little fatigued during this marathon of information by this point, but it was cool to hear about Archive-It and WebRecorder, think about how they are similar/different and can be used in different ways, learn about NYARC’s documentation on web archiving, and hear Rebecca Guenther speak on how standards for web archiving were developed. WebRecorder is a real game-changer for the field and I look forward to seeing its continued development (and am very happy about their recent grant for a couple more years of development). Thanks, Morgan McKeehan (NDSR at Rhizome), for giving a WebRecorder demo and linking us to some great web-archiving tools!

The final talk of the day, and the most anticipated (for me, and not just because it was the last talk), was Dinah Handel talking about open source software for audiovisual preservation! My favorite thing, as is obvious from my resume and how I spend my free time. Dinah is an NDSR at CUNY TV, a broadcast production archive. Dinah truly had to speed-read through her talk, and it was difficult at times for even me, someone who already knows about this stuff, to keep up. But she provided a link to a transcription of her talk so we can all review it later. Yay!

What I find so great about Dinah’s talk is that she makes it clear that she didn’t have experience in this at the beginning of her residency, and that any person with the will to do so can also learn how to do the things that she is doing — things that seem “too hard” or “too complicated” or “too technical.” Having had a similar experience, I feel the same way, and I fight really hard to break down these stereotypes, but they linger on.

Her first tip was that she started to learn how to write scripts by reading scripts written by others. This is a great way to learn how to do things! Just try it out until it works. I’ve heard a friend of mine say “Lines of code are free.” Compiling is also free. It doesn’t cost you anything to just keep trying.

Dinah also went through a script she (presumably?) wrote, one line at a time. This is my favorite way to learn new things, and similar to Saron Yitbarek’s “Code Club” strategy. Trying to explain the complexities of audiovisual files and issues with archiving is such a challenge, and hard to do with limited time, but I’m so glad it can be done in such a kind and friendly way.

And that was the event! I enjoyed hearing about the work and progress from each NDSR, a sort of mini-thesis-defense for the program, as they are all wrapping up their fellowships next month. I hope this program continues to receive funding and support, and I hope the future batches consider doing symposiums as well!

Resources for the talks are available on GitHub: https://github.com/dinahhandel/NDSRNY2016_Symposium

Ashley note: Super shout out to Peter Oleksik, Assistant Conservator at MoMA, who coauthored this blog post with me, was the impetus for the post existing at all, and provided very thorough factual information that turned my flippant guesswork-notes into real knowledge. Thank you, thank you Peter!!!

Post-posting Ashley note: Second post is now up!

Here’s more words at the intersection of Beyoncé and media preservation. (Yeah, by popular demand…) Let’s talk about aspect ratios in Lemonade.

History of Aspect Ratios

First, what’s up with aspect ratios? Aspect ratios are the kind of thing you only notice when they appear outside of the expected norm, unless you are the kind of person who thinks about media formats and standards a lot. Simply put, an aspect ratio is the proportional relationship between an image’s width and height.

It helps to see this visually represented within the context of each other. Here’s a link to some aspect ratios compared c/o Wikipedia.

The earliest aspect ratios came out of paintings, which provided four right angles in which to “frame” the subject matter. This was then adopted by the theater with the advancement of the proscenium arch, which again allowed for a frame to compose the mise-en-scène of the performance. Photography borrowed from the painting side of things to again “compose” an image, and cinema followed suit to then project this composition in a theater.

Think about how film works: light passes through rapidly moving images and casts shadows on the emulsion; then, after the film is processed, these shadows are projected onto a screen. To make the basic mechanics of this work, sprocket holes exist on the sides of a very, very long piece of sturdy material (plastic, mostly), thanks largely to W.K. Dickson and his early kinetoscope. The way all of this technology-creation worked out is that we ended up with a frame that is approximately, but not perfectly, square. The reason for this is possibly arbitrary, either necessary for the sprockets or inherited from a “full plate” daguerreotype (technology tends to build on previous iterations, as highlighted above, so the shape was probably inherited in some respect).

Edison took all the credit for the kinetoscope and went on to standardize the 4:3 aspect ratio for film projection (well, in the US; there were also standardizations happening around the globe in the early 20th century). By the time broadcast television rolled around, and because engineers are lazy and just go with what came before, standards were set, because lots of systems made by lots of different companies all had to get along. It’s kind of like having to worry about whether something is Windows-compatible or Mac-compatible because the underlying systems are intrinsically different from each other. Imagine if you had to buy a different TV to see different broadcast channels — it wouldn’t have taken off very well.

So what exactly does this have to do with aspect ratios? Well, with this standardization, the 4:3 aspect was basically “locked in.” However, Hollywood, fearing TV as a competitor, started to push the boundaries of the ratio into wider and larger formats. This’ll become more important later, when these ratios start to become integrated into the 4:3 (and later 16:9) aspect of video (analog and, more importantly, digital).

You may be asking yourself at this point, “uh, I thought this was about LEMONADE?” We’re getting there, but first, a brief explanation of why LEMONADE is so formally interesting. As you may have gleaned from the previous few ‘graphs, aspects adhere to standards, and artists worked within their confines, choosing the ratio that best suited whatever they were trying to convey. This was important, as described above, because of the need for interoperability and aesthetic conformity (all TV sets are pretty uniform). However, we’re at a unique moment where we have a smorgasbord of ratios to draw from, and with displays shifting in their shape and size, we’re seeing ratios tossed into a blender and delivered up without a care about black bars obscuring the image or images stretched to fill a screen. LEMONADE is a perfect example of this “devil may care” attitude with ratios, using them to suit the stylistic purpose rather than making them fit a particular screen (is there even such a thing anymore?).

We’ll strive to talk about the aspect ratios used here and where they were used in the past and why they were used, but I’m not going to integrate any theories as to why the directors have chosen to use these aspect ratios, but feel free to do that yourself (that kind of speculation is at least another full blog post’s worth).

Mechanics of Aspect Ratios

Aspect ratio dimensions are typically displayed in one of two ways. In the first style, the height of the image is 1 and the first number is the width in relation to that height: 2:1 means the image is twice as wide as it is high. But it’s also common to see aspect ratios in an easier-to-read format such as 4:3. 4:3 and 1.33:1 are the same concept, just represented differently. This is obvious to your average 5th grader, but easy to forget when you’re a person who hasn’t had to “math” anything in 20 years and isn’t thinking about these numbers in the context of mathematical representation.

So, representing an aspect ratio as 4:3 or 16:9 is just a simpler way to express the same relationship. They are used interchangeably, but some numbers break down cleanly and others don’t have an easy fraction. For example, 4:3 and 16:9 are easier to remember and type than the way their ratios break down in relation to 1 (1.33:1 and 1.77:1, respectively, with the trailing decimal of the first number continuing into infinity). A widescreen ratio like 2.35:1 only breaks down to 47:20, which isn’t any more pleasant to say than 2.35:1.

All of this to say that for the sake of consistency and clarity, when I talk about ratios I am going to primarily use the number broken down as it relates to the constant of 1.
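The back-and-forth between the two notations is simple arithmetic; here’s a small illustrative Python sketch (the helper names are my own, purely for demonstration):

```python
from fractions import Fraction

def as_ratio_to_one(width: int, height: int) -> float:
    """Express a width:height pair relative to a height of 1."""
    return width / height

def simplest_pair(ratio_to_one: float) -> Fraction:
    """Reduce a decimal ratio like 2.35 back to a whole-number pair."""
    return Fraction(ratio_to_one).limit_denominator(100)

print(round(as_ratio_to_one(4, 3), 2))   # 1.33
print(round(as_ratio_to_one(16, 9), 2))  # 1.78
print(simplest_pair(2.35))               # 47/20
```

Note that 16/9 rounds to 1.78; writing it as 1.77 (as this post does) truncates the repeating decimal instead.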

Finally…Ratios in LEMONADE

OKAY, let’s get into this visual album masterpiece, LEMONADE.

The album starts off with a 2.2:1 aspect ratio. This is the standard used in 70mm film (as mentioned above, widescreen formats were originally created to compete with the TV in homes in the 1950s, but are now de rigueur in video, as you’ll see below). The video here, though, was shot digitally and cropped to this ratio later (it’s common in cinematography to compose for one aspect ratio while shooting in another). Which makes sense, because footage from this same shoot shows up later in different ratios. Anyway, 2.2:1 is wider than what was historically considered “widescreen” in United States cinemas (1.85:1) but less wide than the modern “widescreen” cinema screen (2.35:1). It is also slightly narrower than the 2.33:1 aspect ratio (also known as 21:9) used in some contemporary televisions and computer screens.
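A side note on that crop-in-post workflow: the arithmetic is just frame width divided by the target ratio. A hypothetical Python helper, assuming a 1920x1080 digital source frame:

```python
def crop_height(frame_width: int, target_ratio: float) -> int:
    """Pixel height for letterbox-cropping a frame to a target ratio,
    rounded down to an even number (video codecs generally prefer
    even dimensions)."""
    height = int(frame_width / target_ratio)
    return height - (height % 2)

print(crop_height(1920, 2.2))   # 872: a 1920x1080 frame cropped to roughly 2.2:1
print(crop_height(1920, 2.35))  # 816
```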

This opening shot lasts 15 seconds and we don’t see that size again for the rest of the video.

For the second shot, we go even wider, to 2.667:1, the widest possible lens range for Cinemascope (“full/silent”). “Full” because it’s at the maximum potential width, and “silent” because audio tracks recorded on film take up space, which is typically the reason for the difference between a 2.667:1 ratio and 2.55:1 in the context of Cinemascope.

The third aspect ratio we see is 1.77:1 (or 16:9), which is the aspect ratio for the presentation of most HD television/video. (Nerd tangent: Dr. Kerns Powers came up with 16:9 as the mean of the two extreme ratios, 4:3 and 2.35:1. Initially proposed as a compromise, it has become THE aspect ratio of the present.) This is the aspect ratio most frequently used on the album. It also conveniently fits perfectly within Tidal at full screen on a MacBook Pro, or an HD television streaming HBO, which are the preferred viewing methods. This is the new “full screen”: the aspect ratio of most screens currently being sold today.
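Powers’ compromise is often summarized as (approximately) the geometric mean of the narrowest and widest ratios then in common use, which is easy to sanity-check in Python:

```python
import math

# 16:9 lands close to the geometric mean of the narrowest (4:3)
# and widest (2.35:1) ratios in common use at the time.
mean = math.sqrt((4 / 3) * 2.35)
print(round(mean, 2))    # 1.77
print(round(16 / 9, 2))  # 1.78, close but not exact
```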

Did you miss this completely when watching Lemonade for the first time? That’s not at all surprising because these three aspect ratios hit us in a span of less than 30 seconds (and half of that time was the opening shot)!

Then just when your head is spinning from so many sizes of images all at once, the album goes totally wild by throwing in a mega-wide 3.5:1 ratio! Whaaaaattttttt!!!

I don’t even know how to talk about 3.5:1! It’s just SO WIDE. It is unconventionally wide.

Robyn’s Call Your Girlfriend is pretty close, coming in at a 3.35:1 aspect ratio. Tangential, but this song is the opposite of the narrative Beyoncé is singing about on her album. Call Your Girlfriend is about telling the dude you are sleeping with to tell his girlfriend about the affair because it’s over between them. [Ashley note: I really like this song but I also think it’s incredibly rude and it will make me cry if I think about it too much.] Sorry, Robyn. Maybe Beyoncé just needed to take it a step further here.

After settling into these rapidly rotating ratios for a little while, along comes the boxy “standard video” image of 1.33:1 (4:3), historically the aspect ratio of all television up until the official crossover to HD, which broadcast networks adopted across the board in the early 2000s.

Later, during “Don’t Hurt Yourself,” we return to a shot very similar to the opening, but the ratio is at 1.77:1.

See how similar they are, but with different ratios cut in post?

Some of the subsequent shots cut down to 2.48:1.

And then it goes back to 2.667:1 again.

The more-square 1.33:1 (or 4:3) ratio is used throughout, same as the above “standard definition television” but meant to represent a “home movies” feel.

However, we are going to save this for the next blog post which focuses on home movies and aspect ratios in the video based within that context.

Something else to note: “Sandcastles” is the only song segment of the album that keeps a steady aspect ratio for its entirety. The ratio is 2.35:1 (widescreen/Cinemascope).

So that’s the aspect ratios in LEMONADE. As pointed out to us by Seth Anderson, this isn’t the first example of Beyoncé’s aspect ratio weirdness.

To follow up with the “Formation” blog post, we do see some added format “errors” a few times, although much more sparsely than in “Formation.”

The most noticeable errors happen during “Love Drought.” There are some emulsion scratches, light leaks, and off-balance film sprockets (due to the film wind being too loose, the sprockets being broken, or the film being warped/shrunken/damaged). This (manufactured) error is similar to one that could happen as a result of water damage, causing the film to warp and the color printing to fade. It also looks like a light leak, but tinted blue-green instead of an expected color (red/white).

Something to note is the frame line at the top of the image, so this is either zoomed (presumably 35mm) or a fake frame line.

These beach-with-a-buddy scenes also have the visible frame line with more noticeable, irregular shadows along the top of the frame.

This camera has a bit of a “fisheye effect” going on, with a very specific focal point in the middle. Parts of the shot that should be straight lines are curved more than usual, and the edges of the frame are blurry. This is (likely) from a wider-than-average lens on the camera when recording these scenes, and not something done in post-production.

Next post goes into the specifics of the different kinds of personal home movies seen at the end of the album. And just like last time, please direct corrections to the Issues page or reach out directly, maybe on twitter, for comments and discussion.

Today, Dr. Carla Hayden was nominated to be our 14th Librarian of Congress; she is an incredibly smart and talented librarian dedicated to inclusivity. Also today was the official announcement of Open eBooks, an initiative that opens up free and easy eBook access to children and lays the technical foundation for making access to ebooks easier for everyone through the LibrarySimplified project (among many other things). These are both great strides toward a more democratic society and greater equal opportunity in access to knowledge, and it’s great to see these things manifested in an area so close and so important to me.

On a personal note, today is the half-year anniversary of me working at NYPL, which I am thoroughly honored to do every day. Even on days when I’m scared that I don’t know what I’m doing, even on days when my neck hurts from poor desk-sitting activities, even on days when I disagree with someone or something. I feel so grateful to be working in this field I love, and to know that the work I get to do benefits everyone from my fellow librarians and archivists to every person who has open internet access. I’m so grateful to earn a living wage — and beyond that, a (what I consider to be very) comfortable wage doing what I love and for that which I love, to support not only myself but two little living-breathing-snacking monsters.

Beyond this, I get to work on several open source projects and with organizations that shape the future of audiovisual preservation, something that causes a joy I feel deep down, from the 12-year-old person I once was.

Beyond that, I know so many wonderful, kind, and caring people, both professionally and personally, that make me a more well-balanced and better person every day. I can’t imagine not living in New York because I can’t imagine not having them, nor can I imagine not having access to instant messaging and instant communication with some of my closest friends who live far away.

I just feel so privileged to be able to do these things and to have this life, and I hope I can continue working on helping others have the same. Especially health: being healthy is a privilege often forgotten unless the risk of its loss is immediately in front of you.

To be candid, I hadn’t been having a great time recently. (Hint: You can be grateful and still feel bad or tired or unhappy.) It wasn’t a lack of appreciation for all that I have, but maybe from focusing on some less-than-great things and working towards the personal recovery needed to move past them, which is normal and purely requires time and rest (and maybe more). But in the end, I am happy and I am healthy, and for that I am endlessly, endlessly grateful.

Format-ion: Video playback errors in Beyoncé’s latest music video

Like most people I know, I spent most of yesterday watching “Formation” over and over, and noticed Beyoncé really has a thing for faking media playback errors for some reason. And this new digital drop covers many errors and several formats all in one. You can also watch things get digital-ailments-buckwild in the video for Grown Woman, which mixes real home movies (that have real analog video problems) with the modern-day, grown-ass Beyoncé who has a video visual effects team to make things look bad on purpose.

But… what are those problems, how are they caused, and how do you avoid them? Ladies, let’s get in( )formation!

Film

I’m going to move through the video by format chronology, rather than video playback runtime. Film is the easiest to diagnose and oldest media format, and I saw a couple of (computer-generated replications of) errors rear their ugly heads! Let us slay.

Light leaks

This fuzzy, overwhelming red is due to the way film is/was created by specific exposures to light. If light enters the camera during the filming process, the underlying emulsion on which the images are stored is washed out and this sudden, excessive exposure to light is interpreted on film as a red glare. In a few shots, the fake light leaks in this video are a bright teal, which is unrealistic. Changes in hue other than red are more likely to occur in video playback. This error is unpredictable and occurs before the film has been developed, which is a chemical process that turns the negative image into the expected image that is then projected on a screen.

Dirty/scratched film

There are a few fake “film scratches” in the video, some more realistic than others. There’s a subtle, white, vertical scratch added in one scene, which looks a lot like a real scratch caused by a large particle caught in the gate while the film is being projected. There are also a few scenes of added, irregular horizontal scratches that are bright green. Horizontal scratches do happen, more likely during handling than through the wear and tear of a machine, but they aren’t expected to be bright neon green (not impossible, just improbable). Film scratches are usually vertical because they come from being played on dirty projection equipment; video scratches are horizontal because of the way tape is played back. If the bright green “scratches” were less curvy, they might look more like a scratch on a video tape.

Blurriness

Blurriness here is caused by a post-production filter, but it could be caused by poor tension between the film and the projector or by the lens moving out of focus, the latter being a problem regardless of format.

Video

Sync issues..?

This is a computer-generated replication of “video going wild,” so it’s hard to pinpoint exactly what’s wrong, but if you see this, it’s probably some sort of tracking and skewing error generated (when it occurs naturally) by a breach in communication between the magnetic tape and the device playing it back. You can see this on consumer decks when stopping a tape abruptly, which disconnects the magnetic signal and releases the tension between the tape and the video head drum. Or maybe through messing with bad cords. Or maybe someone stuck a damn magnet on top of the tv again.

Lines

Analog video has a much, much lower resolution than modern digital video (and thus modern monitors, where video is now played back), which means each individual scan line is visible. You can think of the lines as pixels — something I can no longer see on a Retina MacBook, can barely see on older machines, and can see clearly on much older machines. The lines are even more obvious when a low-resolution standard-definition video transmission is cut in between HD video.

(Also the “Play” text is looking a little wobbly here, maybe too much playback of a consumer-grade tape or you should clean the VHS heads on your deck?)

It’s also more obvious when it’s just a fake filter, which is the case here, but whatever. You can see in the image above that it doesn’t have the same line problem as in other clips even though they are meant to be part of the same tape. The above image is also an example of the applied ghosting as well as an oversaturated image, resulting in whites that appear “blown out” and a lack of true, rich black tones. There’s also some interlacing going on, which we’ll get to next.

Interlacing

Interlacing issues can be seen during movement, where squiggly lines appear in places of motion. In interlaced video, each field contains 50% of the line information required for a full picture: the even and odd fields, played back alternately and quickly enough, look like a complete image. This was done because it made video signals faster to transmit, as in broadcast television. Now it just lingers around making everything look like garbage. There’s a good summary devoid of sentimental feelings available on Wikipedia though.
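To make the idea concrete, here is a toy sketch (with made-up scanline data, nothing from the actual video) of how two fields weave into one frame, and why motion between fields shows up as combing:

```ruby
# Two consecutive "frames," each an array of four scanlines (fake data).
frame_a = ["a0", "a1", "a2", "a3"]
frame_b = ["b0", "b1", "b2", "b3"]

# Field 1 carries the even scanlines of frame A; field 2 carries the odd
# scanlines of frame B.
even_field = frame_a.select.with_index { |_line, i| i.even? } # ["a0", "a2"]
odd_field  = frame_b.select.with_index { |_line, i| i.odd?  } # ["b1", "b3"]

# Weaving the two fields rebuilds a full-height picture. If the scene moved
# between fields, the alternating mismatched lines are the "combing" you see.
woven = even_field.zip(odd_field).flatten
puts woven.inspect # ["a0", "b1", "a2", "b3"]
```

Half the lines come from one moment in time and half from the next, which is exactly why deinterlacing filters exist.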

Ghost image

When a video signal is not properly balanced during transmission, softer secondary signals are created, which appear as a shadow-like image to the right or left of the primary image. If dealing with an analog asset, make the video deck “prove to you that it got some coordination” by using a time-base corrector, and the ghost image should go away. If dealing with a digitized analog video, the ghosting was already burned into the digital stream upon transfer and the results are permanent (which is why getting the highest quality signal during analog-to-digital migration is so important).

This is similar to the issue of image lag, which appears in video formats older than the one being imitated here (probably VHS). This is also maybe how the world looks if you follow Mrs. Carter’s advice a little too hard and you’re sippin’ Cuervo with no chasers.

Drifting bars

Sometimes a fuzzy, faded bar will appear to move across the image, caused as the tape wears out from frequent playback, possibly on low-grade or not-regularly-cleaned machines. This is kinda faked during the digital intro, which is just weird.

Dropout

Dropout is such a common video error! It’s practically “that video feeling”! Anyway, it’s caused by the wear and shedding of the magnetic particles on tape, which come from playback or just from long-term exposure to oxygen, dirt particles on the tape, poor environments, and tape mishandling. It’s kinda sprinkled in during the video segments but hard to really pinpoint other than during the “sync issues” section above. It shows up a lot in “Grown Woman” though (and in LIFE)!

Low res

Okay, we know the fair Victorian Beyoncé-and-friends scenes were filmed with a standard HD camera, but for just a second there’s a low-resolution version at the 2:45 mark. Why? Why? Anyway, this is what things look like when filmed on a crappy, cheap camera. Again, the lightest part of the image is blown out, which leaves a blue shadow around the silhouette in the (relative) foreground. Also the aspect ratio looks weird.

Digital

Digital screens

Finally, don’t forget about digital video! The only intentional errors are at the beginning of the explicit version of the song, where a warning is typed on a screen that has been recorded with (presumably) a digital video camera. If you’ve ever pointed any kind of camera at a monitor, you’ll notice that the monitor’s refresh rate is not in sync with the rate at which images are captured on video, which results in a kind of visualized representation of what human eyes can’t naturally see: the screen refreshing itself. In this, you can see very small lines as well as a band that moves quickly upwards. On a video recording, very old monitors would show a slow band moving downward across the screen. Modern computers refresh at a much quicker rate, which results in these light bars that resemble the video lines mentioned above.

“Glitching”

Some more “glitching” occurs in this scene (pixels shifting in and out of focus and changing colors, blurred text, ghosting), but it’s not representative of something that would occur naturally on the screen or within the camera, just a cheap effect. Some of it sorta looks like it’s imitating flickering from an electricity surge. Anyway, digital video glitching drives me crazy both in a professional context (because I slay) AND because I think it’s wack (also because I slay).

Codecs

There is another digitally generated “glitch” that I think is trying to capture the feelings of a clogged video head, but could theoretically be similar to a codec error that would leave a black bar temporarily over a portion of the screen on a per-frame or chunked-frame basis.

Conclusion

Anyway… one video, three generations of moving images. Remember all of these issues are a lot more likely to happen if you do dumb things like keep hot sauce in your bag (video preservation swag bag) so follow good media-handling practices. And remember if Beyoncé’s childhood home movies are suffering, what about yours? Please consider migrating to a safe digital surrogate as soon as you can, if you can afford to do so.

PS: Hello, video preservationists. I posted this somewhat in haste (aka this isn’t a damn whitepaper, y’all, it’s a blog post), so please direct advice and corrections to the Issues page or to me directly! I like collaboration more than fact-checking myself thoroughly out of fear of embarrassment. Get on this level and submit a pull request!

Special thanks to BAVC for creating the A/V Artifact Atlas, a free, open, and invaluable resource for the identification of video errors.

Blacklight is a discovery layer intended for library catalog records, which is great. Hydra also uses Blacklight as its discovery layer. The foundation of Blacklight is built upon Rails and SOLR. Plus Bootstrap for style and Devise for user authentication. And the big bonus on top is easy MARC ingest to suit librarians. But what if you don’t have MARC records? After fussing around with getting SOLR to appreciate my beautiful, custom non-MARC data, I decided to just give up and do it myself.

Usually that sounds like something that would end in a complete disaster, but I found that it was a lot easier to “roll my own” Blacklight using Rails, SOLR, and the gem sunspot, since I was using the tools I was already familiar with. I also didn’t need to set up Devise since user authentication wouldn’t be necessary for my purposes. Having worked with Devise before, I know it can get in the way (especially in the routes).

Because I was working with custom data, it was easy to import a CSV of that data right into a Rails model instantiated with all of the necessary fields.
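A rough sketch of what that import can look like (the column names and the Video model here are hypothetical, and in a real Rails app this would live in a seed file or rake task):

```ruby
require "csv"

# Fake CSV data standing in for the real export; the headers become model fields.
csv_data = <<~CSV
  title,DocType,year
  Sample Tape,video,1992
  Test Reel,film,1968
CSV

# Parse each row into a hash keyed by the header names.
rows = CSV.parse(csv_data, headers: true).map(&:to_h)

# Inside a Rails app, each hash can instantiate a model with all fields set:
# rows.each { |row| Video.create!(row) }
puts rows.first["title"] # Sample Tape
```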

After that, sunspot to the rescue! Sunspot makes indexing data into SOLR incredibly easy. It comes with built-in rake tasks that help take care of starting and stopping of the SOLR engine, which runs quietly in the background as you work and debug. It also comes with a rake task that can be used to reindex, also good while developing and figuring out what needs to go where.

For help with understanding sunspot, you could read the docs, but I think you’ll have better luck, as I did, with Ryan Bates’s RailsCasts episode on it. Seriously, RailsCasts has saved my ass so many times in making me actually understand how different codebases work.

Episode here: http://railscasts.com/episodes/278-search-with-sunspot?autoplay=true

If the supplemental code at the bottom isn’t enough, more code is here: https://github.com/railscasts/278-search-with-sunspot/tree/master/blog-after

The docs are pretty good, but I found that there were a lot of syntax idiosyncrasies that weren’t fully explained and had me adjusting my searchable do block over and over again. Aspects like limit -1 are in the examples to show all facets, but it doesn’t always work. These kinds of things had to be solved through trial and error. As a result, the code is also not very “DRY,” which is okay with me because I’d rather have things that work and can be customized easily during the design stage than try to be super-clever early on.

The difference between string and text also was not super clear to me, which might have been my own skimming fault. Tip: You can apply facets to strings. You can perform freetext search on text. I wanted both (oh yeah, and I also wanted to index everything), so I had to index every column as both text and string.
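For flavor, here is a minimal sketch of what that double-indexed searchable block can look like (the Video model and the field names are hypothetical stand-ins; this assumes the sunspot_rails gem):

```ruby
class Video < ActiveRecord::Base
  searchable do
    # text fields are tokenized for freetext search...
    text :title
    text :DocType
    # ...while string fields keep the exact value, so facets work on them.
    string :title
    string :DocType
  end
end
```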

After that setup is done, you can move into the controller.

First, set up facets!

facet :DocType
with(:DocType, params[:DocType]) if params[:DocType].present?

Then, set up search in a similar way:

fulltext params[:search]
with(params[:category]).equal_to(params[:search]) if params[:category].present?

In the controller, setting @videos = @search.results will make it easy when moving to the next step, into the view.

For the view, fortunately I was just able to loop through all of the potential facets set in an array and build the facet column. Only a little bit of code is needed to display the search results: just loop through the results (for row in @videos) and (style it up however) and everything is set.
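Outside of Rails, the results loop is nothing fancy; here is a tiny standalone ERB sketch, with made-up rows standing in for @search.results:

```ruby
require "erb"

# Fake search results; in the app these come from @search.results.
@videos = [
  { "title" => "Sample Tape", "DocType" => "video" },
  { "title" => "Test Reel",   "DocType" => "film"  }
]

# Loop through the results and style them however.
template = ERB.new(<<~HTML)
  <% @videos.each do |row| %>
  <p><%= row["title"] %> (<%= row["DocType"] %>)</p>
  <% end %>
HTML

puts template.result(binding)
```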

Here are a couple of GIFs demonstrating the final product: a database of testable Matroska videos for use with the MediaConch project, culled from ‘interesting’ Matroska videos uploaded to the Internet Archive.

So, moral of the story: If you are thinking about using Blacklight but don’t have MARC records, it might be easier to use sunspot and skip the cruft.

I’m not trying to act like I don’t care about git commit logs and streaks, because I do. In a greedy, childish way, I do, and I will continue to stare deep into the soft green blocks of a Github streak. I even sit briefly moody knowing that a job change this year removed so many little boxes from the front half of this graph.

Blogging your resolutions and goals is a good way to put yourself to task. Last year I viciously wanted to do more (and I did). This year, I want to do less (but I probably won’t).

But my motto for this year is “Do a few things well.” (It’s also “no new projects” and you’ll see why later.)

…?

Last year, it made sense to be so dedicated to the hustle because I was struggling to get somewhere. At the start of this year, I am here — I am where I want to be, and I just need to remind myself that it’s okay to take a breath if I need to before continuing climbing up the mountain. That it’s okay to say no, sorry, I do not have time for that. It’s okay to not submit to speak at conferences. That it’s okay to spend a full Saturday doing nothing but clicking the ‘next’ button on Netflix if that is what I need to recover from a stressful week. I think this is something most people do without an anvil of guilt hovering above them, but I guess I am not most people.

Also, making time for friends is always, always worth it. I have felt especially grateful this year to know so many comprehensively wonderful people, so many that I can’t even count.

BUT. How can I put this…

Must my determination of success always be contingent on productivity levels, even through weekends and holidays? I’m not even an academic!

I don’t really want to be (nor plan on being) less busy, but I have two problems: I care a lot and I care about a lot of things. And what I have lacked is an opportunity to focus more on fewer things and do them well. Design has been on my mind lately — I’m not a bad designer (nor is it necessarily a truth that I am good, but I’ve had enough training and practice to side-eye inconsistent page-centering practices, et al), but all the design work I’ve done is pretty bad because it is done as an afterthought to solving another problem and I don’t give myself the time to do it well.

For some of this year, I was working as a ‘front-end developer’ on a project but ended up spending the majority of my time setting up site backups, wrangling database structures, and designing the site. Labels don’t matter that much to me, but at a point I became very aware that if my work were to be judged as that of a ‘front-end developer’, it wouldn’t make me feel very good considering only 1/6th of my time (which was already limited — as nonprofit budgets are, no shame there) had been dedicated to that component. So by the time I got around to writing HTML/CSS/JS, you bet I popped Bootstrap on it and you might even find some !importants strewn throughout a hastily-gathered base CSS template (sorry, my front-end pals, if you’ve since left screaming — the site is not done, I will make the !important un-important in due time, but sorry about the decidedly un-hip Bootstrap). When this site goes live or the code is made open, will I seem bad or lazy? I like to think that I am neither, but the constraint of time might make me look that way. This also goes along with me liking to think that I have a “strong attention to detail.” Doesn’t everybody like to think that about themselves? But when there are a million details instead of a thousand details, it’s harder to give them all attention.

I’m not close to burning out (maybe because I just basically spent the last two days Straight Chillin’), but recognizing burnout as a thing and taking preventative measures is important in this upcoming year. A problem for me is that I like doing it all but there is so much value in focusing on less in order to produce better results.

For context, here’s an abbreviated list of things I will fret over in 2016: New York Public Library (in many ways, in many applications, and most important of all), MediaConch, XFR Collective, QC Tools, ffmprovisr, Screen Slate, La MaMa, Actual Material, Code4lib’s video stream, AMIA committees, W*iHPEM formations… and I’m sure there’s more than one of that-other-thing-I’ve-already-forgotten-about and that-thing-I-don’t-yet-know-about. There are also other, small, secret things that are just for me and I am sad they are not given the care and attention I wish I had for them. When I see an amazing side project or personal portfolio site, my eyes always flash green for a second, envious of time. I also tend to add things to that pile — the personal-projects-pile — because it is so much easier and more fun to think of something I’d like to do rather than actually spend time doing it. So a side-goal is to not start anything new until I’ve finished what has been started.

But… right? That’s a lot of stuff. Keep me in check. Keep me in ample quantities of chocolate-covered espresso beans. 2016 is going to be great but only if we take care of ourselves and of each other.

For this year’s hackday, I brought ffmprovisr to the table. This was an app I made over a year ago that hadn’t been given enough attention, primarily due to lack of time. My fumbling pitch went something like this: “I think it’d be fun to combine and continue to build up these two projects into something better because ffmpeg continues to live on as a mysterious but necessary component of a/v archival practice. This project would be mostly R&D with some basic front-end web development skills (building forms). I feel this is a little out of the scope of hack day (and those greedy for rewards may seek refuge elsewhere) in that it’s more of a REMIX project and a mostly-hack-the-docs-with-some-coding project, but if there is interest (there was last year, for ffmprovisr) — we will build the hell outta this!”

I envisioned building on the old version of ffmprovisr, which was a guided form for building ffmpeg scripts, but on hack day I realized it was a little too heavy on the hand-holding — archivists that at least had ffmpeg installed on their computers didn’t need to click through forms and select their input and output. They could reasonably be expected to have the ability to look at a sample script and base their own script off of it. So we changed the structure from a form to a sample command line that also came with a description of what it did and a breakdown of how each command worked.

When originally pitching this idea, I thought it’d be a good “gateway drug” to capture archivists and turn them into developers, anticipating a lot of git knowledge sharing and code-writing habits. In the end, I was primarily the one pushing code (but also Rebecca Fraimow) while everyone else helped to add interesting sample scripts to our shared google doc, parsing commands pulled from the ffmpeg documentation from previous hackdays or dropping in scripts they use regularly as archivists. So the project ended up being even stronger than I imagined it would be (and helped fill the biggest gap in the existing proof-of-concept application)!

So the best part about hackday was the collaborative elements. It was great seeing collaboration happen live between BAVC employees and Reto Kromer, as well as getting some in-use scripts from Catriona Schlosser from CUNY-TV and Nicole Martin from Human Rights Watch. While Eddy Colloton (MIAP) was working on a script that converts a DCP into access copies, help came in on our shared google doc from Kieren O’Leary (Irish Film Archive), participating remotely from Ireland.

We had been working on a forked branch of my original repo, but when I came home from AMIA, I did the right thing and gave ffmprovisr to the people — it now has a permanent home in the amiaopensource repo where collaboration has continued to thrive, mostly thanks to the strong support from Reto, who diligently modified the code to make it consistent and easier to read. Thanks, Reto!

P.S. super happy I went 2009-trendy-internet-tumblr-style with the name of this app because I saw tweets go out as ffmprovisor, ffmproviser, and ffmprovisr (and maybe a few other variations in between).

I’ve been kinda bad at blogging. I have a new job and some old jobs and lots of things that keep me from introspecting on code and archives. Fortunately I have great friends, and one of those friends is Kathryn Gronsbell, who reached out to me as an emerging developer (whether she thinks so or not) and wanted to write a guest blog post! As for me, a person who has very recently had to take on the codebases of at least five complex and dissimilar web applications at my job… I can appreciate some good commenting. Commented-OUT code, on the other hand…

Show Your Work: Why I know commenting your code is bad, but I do it anyway.

Kathryn Gronsbell

2015-11-05

I like to ship ugly, early, and often. My challenge is I’m a Kathryn* who is in the middle of doing three very critical things:

  1. Learning enough Python and Bash to be functional (and dangerous)
  2. Building small scripts to expedite digitized asset quality control at my organization
  3. Trying to triangulate how and why certain decisions were made and performed before my arrival, and how to understand, fix/reverse, or move forward (see Steps 1 and 2)

In 2013, Peter Vogel explained why “people” should not document or comment their code [1]. Vogel makes well-executed arguments, steeped both in conceptual logic and pragmatism.

“Some readers suggested that comments provided insight into what previous programmers thought their code was doing. I don’t care what those programmers thought their code was doing — I only care what their code actually does (though I do care why the previous programmers thought the code was necessary).”

I agree with Vogel here, on all points. But, I do care what the previous person thought their code was doing because it can help explain OTHER decisions that are not documented in the code (or anywhere else). And because my code is wobbly, like a baby deer that snuck into a moonshine distillery, I feel the need to add comments and leave some less elegant code commented-out so I can understand how a certain chunk came together.

Example:

# This loops through the files and sets a variable "bambi" for the extracted string. This takes the legacy strings which are all uppercase and makes them lowercase alphanumeric strings so that they validate against the strings we create locally.
    for key in forest_fileDict:
        bambi = forest_fileDict[str(key)].lower()
        # bambi = bambi.lower()

It works, for me and for now. When it doesn’t work anymore I won’t do it anymore. Vogel’s argument about the resource-intensity of code commenting rings true, but it also doesn’t apply to my current situation. Describing what the code does helps me learn (because I wrote it and had someone else verify it was accurate). I yearn for the day I can write something that is both self-describing and that I can read in 3 weeks and understand. Until then, I will crawl along with my #-prefaced nuggets of wisdom – and happily share my MacGyver’ed efforts for review and revision.

If I listened to this ideal practice, I wouldn’t be creating the processes I need to do my job because it would be too big of an obstacle to take on for every little process I needed done. The code will be ugly, but it will work. And that’s what we should aim for – an agile approach to learning and making. Imperfect action is better than perfect inaction.

I hope if you’re in a similar spot, you now know that you have a comrade. And that one day you, too, can be a “people” Vogel is talking to.

* Being a Kathryn does not involve being a real programmer or code guru. Being a Kathryn means spending hours on StackOverflow, sending over-explanatory email pleas to our very patient IT support folks, and constantly chatting friends about why my nested dictionaries aren’t printing as expected. And thanking very smart co-workers for helping crack uncrackable nuts.

Note: I wrote this up and added it to the !!Con github README but CC’ing it here for ~outreach~ purposes!

!!Con had a live stenographer for live closed captioning of all talks, and follow-up transcripts of all talks. AMAZING! The only conference I’ve seen rivaling this level of inclusivity was Virginia Tech’s Gender, Bodies, and Technology conference which offered live signing for all sessions not segmented into tracks. Also amazing.

2014 !!Con’s talks are all up on YouTube, which is great. But all of us, together, can collaborate to take these talks to the next level by syncing up those great transcripts with these great videos!

I know what you might be thinking: Doesn’t YouTube offer auto-captioning? The answer is YES, it does. And it’s pretty great for some things, but not so great for live talks in terms of reliability.

Introducing Amara. I’ve read about Amara but had never used it before today, and it is RAD. It makes subtitling SO easy, utilizing hotkeys and hitting all of the pain points that come with transcription (which I’ve done a lot of — not super easy).

So here is how to link the !!Con transcript to the shared video.

I’m sure there is a better way to do this, but I prepped the available transcription text file in SublimeText by adding a line break in between each existing line: holding down Control-Shift-DownArrow until the selection got to the bottom, then pressing enter. You can do whatever works for you, but the text won’t import correctly as-is, so this minor data-wrangling is necessary.
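If you’d rather script it than hold down arrow keys, a few lines of Ruby do the same shuffle (the filenames in the comment are made up):

```ruby
# Insert a blank line after every existing line of the transcript so each
# line imports into Amara as its own subtitle.
transcript = "first line\nsecond line\nthird line"
spaced = transcript.split("\n").join("\n\n")
puts spaced
# In practice, read and write real files instead:
# File.write("transcript-spaced.txt",
#            File.read("transcript.txt").split("\n").join("\n\n"))
```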

This way, when I went to upload the subtitles, they were already broken down into bite-size pieces. Some cleanup might be necessary after this point, like getting rid of blank subtitles.

You may be tempted to move ahead and tell Amara you are finished so you can check to see if your work is working right. Just know that Amara doesn’t like it when you do this. It will assume you are done ingesting subtitles and will think you are ready for the timing zone. I spent a while fussing around in this area. The subtitles will save, though, so you can leave and come back as long as you save a draft.

Next is the fun part — watching the video and getting the subtitles in sync. First, it’s fun because you get to watch the talk! Second, it’s fun because you press the “down” button to move to a new subtitle and you get in a rhythm that is both videogame-esque and meditation-esque. I like it. If you lose your groove, you can tab to pause the video, or you can go back and use the slider to adjust the timing of subtitles on an individual basis.

After that, you can save/publish and you are done. The !!Con organizers are able to take it from there because the !!Con YouTube account is already ready to receive the Amara subtitles, which get synced on a regular basis. That’s it!

This is a GREAT way to feel super-productive even if you are feeling too under-the-weather to do other work. (Now that I’m all-freelance all-the-time, I’m super obsessed about whether I am at an optimal state to be billing hourly for my brain.)

Hmmmm… But wait — I did the video you see in examples above. But how do you know if the video you are gonna transcribe has already been done or not? What a great question. You can comment on this Google Spreadsheet and then everyone will know what is going on!

In closing… Don’t stop there! Plenty of conferences and other things could use help with reliable closed captioning!