Open Source Bridge 2017

This week, I went to Open Source Bridge in Portland, OR. It’s a conference “for developers working with open source technologies and for people interested in learning the open source way.” Usually I spend a lot of time taking notes for myself and others via tweeting, but this time I decided to chill on the tweets and try to wrap things up as a blog overview instead.

Day 1!

Tech Reform

Nicole Sanchez asked us to share what we thought the most important issues were using the hashtag #techreform on Twitter. She will aggregate the responses and create a GitHub repository with them broken down as Issues.

Inclusive Writing Workshop

I was unsure about this at first, as a person feeling fairly well-versed in these topics, but it was very helpful to work through them via hands-on activities such as interviewing people and writing biographies inclusively. Thank you, Thursday!

Stenography

Josh Lifton from Crowd Supply showed us how interesting stenography is and what a great open source community has started thriving around it!

Why low tech?

“Who are we democratizing things for?”

I got a lot out of this talk because I’m interested in thinking through webpage payload size as a major barrier for low-connectivity regions – a crucial component of accessibility that doesn’t get (IMO) as much discussion. This talk gave some good examples: YouTube’s homepage alone creating a barrier for low-speed internet regions because it was so big, people learning to code on their non-smart phones, advocacy via phone lines when the internet is intentionally shut down, and how video needs to work on desktop/mobile in more ways than “size of display.” Glad to see these can be tested in DevTools by throttling down to 2G, etc., to see how slowly a page loads. The slides include a great analogy to explain web slowness in terms of “it’d take you this long to walk this distance and to load a 10 MB webpage on a 250 Kbps connection.”
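That walking analogy is easy to re-derive. Here’s a sketch of the arithmetic, using a 10 MB page and a 250 Kbps connection as in the talk’s example (the exact numbers in the slides may differ):

```javascript
// Rough download-time math: a 10 MB page over a 250 Kbps connection.
var pageBytes = 10 * 1024 * 1024;    // 10 MB in bytes
var linkBitsPerSecond = 250 * 1000;  // 250 Kbps
var seconds = (pageBytes * 8) / linkBitsPerSecond;
var minutes = seconds / 60;
console.log(minutes.toFixed(1) + ' minutes'); // roughly five and a half minutes
```

Five-plus minutes of staring at a blank screen for one page – that’s the barrier in concrete terms.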

Cryptography

Niharika Kohli gave us a historical overview of ancient and semi-ancient cipher techniques. She discussed steganography, microdots, printer yellow dots, image steganography (turning a tree into a cat), transposition ciphers, Rail Fence transposition, Route transposition, mono-alphabetic substitution ciphers, the Caesar cipher and frequency analysis, the “Unbreakable Cipher”, the Jefferson Disk, the Beale Ciphers, Charles Babbage, Arthur Zimmermann, and more. The best part was getting quizzed at the end and seeing how badly the audience did at solving ciphers quickly and easily.

Democratizing Data

Slides

What should we ask as developers to push the needle further? Lorena Mesa was asking all of the right questions here:

  • For people who have access to our data and publish from it, where’s the line?
  • Are the people making software purchasing decisions doing right by the people working on the ground?
  • How do we have conversations around ethics of data cultivation and usage?
  • How does design in software change the social fabric around us? (E.g., Airbnb’ing while black)
  • What kind of ethics training do you do for your team?
  • Let’s talk about our data lifecycle – what is your organization’s policy? Storing and using, sure, but what happens after data hasn’t been in use for a while, or is no longer useful?
  • Are we thinking about deleting data or how to repurpose data? Are we thinking about how to give people permission to remove their data?
  • If you are a startup that gets bought, what happens to that data?

She recommends:

Day 2!

At the end of the second day, Andrew Weaver and I spoke about open source tools used by a/v archivists. Day 2 was a little heavier in the Hacker’s Lounge, reviewing our talk before giving it.

Of (biased) note is a talk from Travis Wagner about connections between open source and the MLIS classrooms in which he teaches at the University of South Carolina.

Day 3!

Day 3, I ditched my laptop so I don’t have notes. Some highlights:

Thanks so much to the Open Source Bridge volunteers, fellow speakers, and conference attendees!

Cry Map: Greater Boston Area, ca. 2010

Last week I was describing the Greater Boston Area as a city I’ve pretty much cried all over. I didn’t live there for very long (and not consistently, either, so this time period is really little sections of mostly 2008, 2010, and 2012…I think). But when I did live there, I was notably very weepy. So I mapped it out and my description was… not wrong. I cried all over multiple cities, constantly. And it’s interesting, I think, to see all of these memories put on a map, just my own personal memories of places. Millions of people have memories of going to the Trader Joe’s in Coolidge Corner, but my memory happens to be about crying over avocados and/or trying to process feelings associated with having been swiftly dumped.

At this point, I guess you’re already like “Oh! Gosh! Ashley! You poor thing, I am so sorry!” But don’t worry about it! I wouldn’t be doing this if I were still rolling around in a little pit of sorrow. At this point in my life, it’s just personal data. And now I can bundle and transfer all of that melodrama to you, dear reader. This will be worse for you than for me. Plus, to assuage you even more, I know from past/current projects that it’s powerful to expose one’s former vulnerabilities when at the appropriately distanced vantage point. I guess I could have made a “great times!” map, but where’s the fun in that?

Anyway, let’s talk about technical things now.

I got to work creating all of these data points from memory via geojson.io, strolling down different memory lanes. This was an interesting memory exercise: seeing how good (or not) I was at remembering where things were and identifying them on a map (and then deciding if I’d purposefully change the location slightly). I could pinpoint a birthday party I went to in Somerville almost to the block just based on feelings while looking at the map, but had no idea how to find an apartment that I lived in for three months. I had to look up the address, and only then realized I had been basing the location on a commute I used to have, mentally walking in the other direction (maybe this is also because Boston is the worst and based on horse trails instead of grid-like reason).

Whoops! That still wasn’t technical. Really, there wasn’t that much to do. I set up a one-page webpage with very sparse CSS and relied on Leaflet and leaflet-ajax to show and style the map. It’s not much more complex than their geojson example documentation. Instead of the default OpenStreetMaps, I swapped it out with something more moody here and added a custom icon of the crying emoji. Sorry, you come here for technical blog posts and I don’t deliver and you just end up finding out I once cried in a warehouse parking lot in Peabody after an ex-boyfriend forcibly took my car keys and left me there. Errr, source code is available here.
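For the curious, the whole Leaflet setup really is only a few lines. Here’s roughly the shape of it, assuming Leaflet and leaflet-ajax are loaded on the page – the file names, icon, and coordinates are my stand-ins, not the actual ones from the project:

```javascript
// Minimal cry-map sketch (Leaflet + leaflet-ajax). File names are hypothetical.
var map = L.map('map').setView([42.36, -71.06], 12); // roughly central Boston

// Default OSM tiles as a stand-in – swap the URL for a moodier tile provider.
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Custom crying-emoji marker icon.
var cryIcon = L.icon({
  iconUrl: 'cry-emoji.png',
  iconSize: [24, 24]
});

// leaflet-ajax fetches the GeoJSON file and places one marker per point.
new L.GeoJSON.AJAX('crymap.geojson', {
  pointToLayer: function (feature, latlng) {
    return L.marker(latlng, { icon: cryIcon });
  }
}).addTo(map);
```

That’s the whole app, more or less: one div, one GeoJSON file, one sad icon.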

So, anyway, at your own inevitably-cringing risk: Cry Map, Greater Boston Area, ca. 2010!

P.S., N.B., I will gladly help you make your own Cry Map – just get your geojson in order!

How to livestream and record a conference when you have no money

I’ve explained this to so many people and thought about this so much that I had to check whether I had already written about it on my blog, but it turns out that I have not.

First, what is this going to be about?

This is a guide to setting up a lo-fi but totally acceptable livestreaming and conference video recording situation using affordable equipment, much of which can be found or borrowed.

If you can afford a service like Confreaks, I really recommend it! They are very nice people and do great work. However, this guide is for when you are in a low-budget situation and it’s better to do it yourself than not do it at all. No excuses!

What do I need?

Hardware:

  • A camera

    Any old handheld camcorder that can plug into a computer will work. Two is even better (one for the speaker, one for the slides)!

  • A tripod

    Tripods go a long way in producing stable video streams. You don’t want to try to set up a camera without one.

  • An analog-to-digital converter

    I recommend the Blackmagic brand because they use an open source SDK. This is the cheapest option. This one is fine. This one is the best, if you are looking for an excuse to buy one for digitization anyway – it’s worth the extra $. I’ve used the latter two successfully, and I’ve seen the first in action.

  • A computer

    An average laptop is fine. The above converters are Thunderbolt, so make sure to buy a USB3 version instead if your computer only has USB3 ports.

  • Cables!

    You’ll need cables to connect the camera to the converter and the converter to the computer, plus audio from the source to the converter (if possible). I don’t know what cables you will need, but The Cable Bible might be a valuable resource to you.

  • Optional: external harddrive

Software:

  • OBS (free and open source) or Wirecast (paid) – either will handle capture, streaming, and recording.

What do I do?

Try to get into your venue a day in advance to set up and run a test trial. It’s very stressful to try to do this right before the conference is about to start, and if you are reading this, there’s a large chance that you are also organizing the conference and are already very stressed.

First, set up the video camera on the tripod, connect it to the analog-to-digital converter, and connect the converter to the computer. You’ll probably have to install the Blackmagic drivers so the computer knows what is being plugged into it (installing drivers will be familiar to Linux users and to people who used computers in the 20th century, and less familiar to everyone else). Install one of the above software options (OBS or Wirecast) and run it; it should help you through the process. This is where you will set up the ability to record the footage to your harddrive (warning: you may need an external harddrive to store footage, as it takes up a lot of space).
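On that storage warning: a quick back-of-the-envelope calculation helps you size the drive. The bitrate and hours below are made-up example numbers – plug in whatever your recording software actually reports:

```javascript
// Rough storage estimate for a day of recorded footage.
// 6 Mbps and 8 hours are hypothetical stand-ins.
var bitrateMbps = 6;                       // recording bitrate, in megabits/second
var hours = 8;                             // one conference day
var megabytesPerSecond = bitrateMbps / 8;  // megabits → megabytes
var gigabytes = megabytesPerSecond * 3600 * hours / 1024;
console.log(gigabytes.toFixed(1) + ' GB'); // about 21 GB for the day
```

Multiply by the number of conference days, add headroom, and you’ll know whether your laptop’s internal drive will cut it.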

If you are lucky, everything will work and you’ll see a video stream on your computer eager to be recorded. If you are less lucky, you might spend a few hours debugging and trying to get the video to appear on the screen by changing various settings. Good luck to you.

Next, you’ll need to plug into the sound system that amplifies the speaker’s voice to the conference room. The cables you will need will depend on that setup, and you might need some extensions to run the cable from the front to the back of the room, if necessary. Plug a cable from the sound system into the analog-to-digital converter and you should hear it broadcasting in the software. If the conference room is very small, you might be able to get away with using the camera’s audio (but it will not be great, so only do this as a last resort).

Adjust the settings in the recording software so video is coming from the camera and audio is coming from the sound system.

With this setup, someone will have to monitor the camera and toggle between the speaker and their slides while streaming. If you are just recording and planning to edit the videos later, you can keep the camera on the person speaking and collect slide decks and edit them together in post-production. A second camera can be added to the setup to just record the screen and toggling between the two can happen within the software.

Okay, next, head over to YouTube and their Live Dashboard. This is where you configure YouTube to start streaming when the conference is ready. Make sure to test out a livestream before the conference starts so you are comfortable with all of the settings, how to turn the stream on-and-off, and mostly ensure that it works.

And that’s it! The hardest part is the setup! When recording, make sure there are two people monitoring the stream and swap out volunteers, so no one gets too tired and grumpy. Watching a stream and also monitoring many levels of social media for people complaining about the service you’re providing for free can be very exhausting.

** Bonus note! Do NOT stream any audio under copyright over YouTube or they will take your video offline. It sucks but that’s how it goes, even if you do it accidentally and only for a minute (this happened to us at Code4lib2016). It’s the tradeoff for using YouTube. If your conference is playing fun music during the breaks, just make sure to mute or pause the livestream during this time.

Wait, why should I listen to you?

I have a whole bunch of years of experience working with video in a preservation setting and a little bit in a production setting. I have set up and run livestreaming for Code4lib and No Time To Wait, and have given this advice to those intending to livestream several conferences (with full success). I also know that this is the setup used by conference videostreaming experts. So if you don’t trust me, trust them (via me)!

External Resources

The planning/resources list from Code4lib 2016 is available here.

The resources list from No Time To Wait! is listed at the bottom of this README.

Making an I Ching application in Javascript

This is the process I used to go about building a small, for-fun webpage. I hope this can help demystify the process of building a website, even if just a little bit.

My first step was to figure out how the I Ching worked, which meant going to Wikipedia and reading about it. This scared me a little bit, because the process is pretty complicated. You know, it’s like the O.G. algorithm. There’s an algorithm used in cryptography (and used in macOS’s /dev/random) named after one of the I Ching divination methods, the Yarrow algorithm. Well, that’s enough “research” because now I know what I need to know – I either need to build one of these divination algorithms, find one, or pretend to.

Then I went to existing I Ching sites to see how they were built – they all seem to be built in PHP, so their logic runs server-side where I couldn’t see it (and therefore couldn’t rip them off).

I found the I-Ching App of Changes and the snippet of open source code that constructed the “yarrow method” of I Ching hexagram construction. What a lifesaver! A lifesaver with an MIT license on it! Now I can be confident that my website will be using the best of the best for oracle-consulting.

Taking a break from the parts that might involve math, I set up some JSON by doing some copy-pasting trickery from Wikipedia and multi-cursor pasting in a rad text editor (I am currently using Atom). Computers are amazing and it only took a few clicks instead of a painful copy-paste for each part of the 64 hexagrams. That’s 64x3! I added a definition, the symbol that represents each hex (cool to find although not reliable), and the number (so I can link to other websites that base results on the hexagram number).
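For a sense of the shape that JSON takes, here’s a hypothetical slice of it – keyed by the six lines read as binary, with numbers and names from the Wikipedia table (the exact field names are my guesses, not the project’s):

```javascript
// Hypothetical slice of the hexagrams data: 64 entries, keyed by line code.
var hexagrams = {
  "111101": { number: 14, name: "Great Possessing", definition: "…" },
  "110011": { number: 61, name: "Centre Conforming", definition: "…" }
  // ...62 more entries
};

// Looking up a hexagram by its binary code:
console.log(hexagrams["111101"].number + ": " + hexagrams["111101"].name);
```

Keying by the line code means the result of a cast can be turned into a reading with a single object lookup.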

The hard problem was the algorithm and that was solved. The easy problem was how to apply the algorithm and produce a cast hexagram and a transformed hexagram based on six clicks (by throwing yarrow six times and counting, or using three coins, etc). Looking at the open source yarrow-sorting code, I came up with a plan to have 1 represent an unbroken and unchanging line, 0 to represent a broken and unchanging line, x to represent an unbroken line changing to broken (strong to weak), and o to represent a broken line changing to unbroken (weak to strong).

Each click of the button does this sort and applies the results (one of the four options) to a string, resulting in a string that can be “decoded” into the first, cast hexagram, and second, transformed hexagram, by switching the x and os out for what they were and what they would become. This was initially hard to wrap my head around, but once I understood it, it was very easy to implement in Javascript so each hexagram could be displayed by pulling the codes out of the hexagrams.json file.

(I also didn’t feel like standing up a server for testing this, and the json file isn’t very big, so I actually faked it and made a faux hash inside of a variable. Whatever works.)

But, for example, if the yarrow algorithm throws “11xxo1” as the result, that would turn into 111101 (which is hexagram 14, Great Possessing) and change to 110011 (which is hexagram 61, Centre Conforming).
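Here’s a sketch of that decoding step, using the four symbols exactly as described above (the function name is mine, not the project’s):

```javascript
// Decode a thrown string like "11xxo1" into cast and transformed hexagrams.
// 1 = unbroken, unchanging; 0 = broken, unchanging;
// x = unbroken changing to broken; o = broken changing to unbroken.
function decodeThrow(thrown) {
  var cast = '';
  var transformed = '';
  for (var i = 0; i < thrown.length; i++) {
    var line = thrown[i];
    if (line === '1') { cast += '1'; transformed += '1'; }
    else if (line === '0') { cast += '0'; transformed += '0'; }
    else if (line === 'x') { cast += '1'; transformed += '0'; } // strong → weak
    else if (line === 'o') { cast += '0'; transformed += '1'; } // weak → strong
  }
  return { cast: cast, transformed: transformed };
}

decodeThrow('11xxo1'); // → { cast: '111101', transformed: '110011' }
```

Both six-character strings can then be looked up in the hexagrams JSON to display the reading.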

Now that the meat of the application was done, I could set up the DOM. I thought about using React until I realized it didn’t make sense to use a framework with a million dependencies just to render a little bit of content, so I used tried-and-true, good-old-friend jQuery. I’m old and not that hip, so my code is, too.

With the DOM set up, I could now dress it up. This used to be my least favorite part because I would get so frustrated with CSS that I would want to scream, but now I don’t feel that way anymore.

I wanted mood ring vibes. To do this, I went to Codepen, searched for color changing background, and cruised until I found something that fit what I wanted. Then I copy-pasted that sucker and made adjustments until it looked right and had the right tones. If copy-pasting code from the internet is wrong, I don’t ever ever wanna be right (nor have I ever ever been right). This made the page look extravagant and beautiful and mature but it took very little work.

In more practical matters, Skeleton has been my not-Bootstrap framework of choice. Google Fonts hooked me up with some additional fancy.

That’s that. That’s how a single-page simple webpage works! This only took me a couple of hours, but two years ago it may have taken me all day! Sometimes it’s hard to tell you are learning anything when you are always learning, so this was a good exercise to go back and try something small to realize that you have learned at all. My code is online, as usual, if you want to use it as a reference and learn how to make your own divinations.

Minimum Viable Station Documentation: Recipes

Last month I created and posted about a Minimum Viable Station document. There was such an overwhelming and positive response to this doc! I’m really so happy to see it expand into such a wealth of expert advice. However, I don’t want newbies to feel overwhelmed at the size of this document. I want anybody, at any level, to be able to set up their own digitization system within the constraints of their financial situation, space, and time. Also, this existing doc doesn’t get down into the nitty-gritty details of how much a setup would cost (which is going to vary dramatically, even when confined to ‘recipes’, as you’ll see below), which is important when planning any project (especially when you have to ask other people to spend money, or have to raise it yourself).

So I made a recipes document!

Minimum Viable Station round two! Let’s do this! Does your institution have a setup you’d recommend to others, or have you done this work for grant-seeking or internal-financing purposes? Please contribute your specs! Do you see something wrong? Please leave a comment in (either of) the document(s)!