Apply for community grant: Personal project (gpu)

#1
by AlekseyCalvin - opened

Greetings, noble-hearted administrators of Hugging Face!

In lieu of starting off with an immediate appeal or proposal, I will first let my recent works/experiments speak for themselves. Further explications and proposals follow below.

Ada Lovelace, from an 1850s daguerreotype, colorized using DeOldify + manually by me, interpolated/enhanced using RIFE:

Three early photographer friends playing dominoes on a terrace, by Gustave Le Gray, 1847, colorized using DeOldify + manually by me, interpolated/enhanced using RIFE:

Lighthouse, by Gustave Le Gray, 1840s, colorized using DeOldify + manually by me, interpolated/enhanced using FiLM:

Arthur Rimbaud (right) and another communard during the Paris Commune:

Village on the water, 1840s, colorized using DeOldify + manually by me:

Charles Baudelaire, from an archive of Atelier Nadar, colorized using DeOldify + manually by me:

A stage actor, from an archive of Atelier Nadar, colorized using DeOldify + manually by me:

Pierrot actor, from an archive of Atelier Nadar, colorized using DeOldify + manually by me:

The earliest surviving photograph, 1827:

Four early photographers on the terrace, 1847, colorized using DeOldify + manually by me:

I have many, many more where these came from.

What I do not have, unfortunately, is a stable source of income. As a multimedia/literary translator and a multimodal artist, I rely substantially on publicly available tools to sustain my workflows, experiments, and projects.

Presently, I have been single-handedly conducting a fairly large-scale conceptual project of colorizing hundreds of early photographs (daguerreotypes, calotypes, salt prints, etc.) from between roughly 1839 (the effective launch date of photography) and the early years of the 20th century, and then animating them using image-to-video models. My particular interest is in bringing to life/motion photographs from before the invention of cinematic methods. That lengthy slice of historical time was widely and diversely documented in photographs, and is thus accessible to us in the form of still visuals, but hardly at all in motion (besides a few curios like zoetropes and later chronotropes), and only far more indirectly in color (via paintings, drawings, lithographs, and other early color art prints). This has sustained a greater degree of perceptual and cognitive dissociation in our habituated ways of relating to 19th-century visual documents, despite their widespread open-access/public-domain availability, than to 20th-century ones. Though this disparity may be clear enough to identify, I would argue that, when it comes to addressing or curbing it, we have simply lacked adequate tools and/or forms (mediums) to do so. Whether or not current SOTA machine learning tools (namely, SVD) are up to par is something I aim to investigate with this project and its associated practices, which I consider forms of multimodal creative translation of documentary sources.

I should note that my academic background is in postgraduate Translation theory/studies as well as Comparative/World Media. I am not presently working at or through a university, though this may eventually change as I keep extending and expanding my current project's scope by more informal means, and I am already considering certain related proposals to bring back to the academic fold. As such, this project may presently be considered personal, but with the potential of becoming academic as I develop concrete research proposals. The most obvious idea to me currently would be a study surveying and contrasting subjects'/participants' perceptual sensibilities toward some particular historical period/context/event/person following exposure to: 1) a set of contemporaneous paintings/artworks depicting the object/period of interest; 2) a later-staged/fictional film related to it; 3) source grayscale still photos documenting the object/period of interest; 4) colorized still photos; 5) the same colorized photos animated using SVD; 6) the same source grayscale photos animated using SVD.

In any case, I hope this covers what I'm trying to do with this space: to use it as a more reliable tool specifically for conducting the above-detailed and illustrated translations of/extrapolations from early photographs. Seeing as I have neither sufficient hardware resources to conduct this work locally nor the monetary means to fund enough GPU-equipped compute server/"space-time" to work with the thousands of photographs I hope to bring into this, applying for such a grant may be the most obvious step. I may eventually also submit a related proposal/grant appeal directly to Stability.ai. However, Hugging Face support would in this instance be more immediately and obviously helpful.

With warmest regards,
A.C.T. soon
