10 responses to “A3 — “Cloud-sourced” image analysis”

  1. janice roper

    Hi Sean,
    Thanks for an interesting look at a new technology. It is fascinating to consider how many pictures are taken each year, and how many photos must exist of certain popular landmarks and sites. I like that something somewhat practical can be done with some of these photos. I sometimes think about how many times I have taken pictures of the same landmarks, and then add in how many other people have almost the exact same, but slightly different, pictures of the Taj Mahal or the Halifax Citadel! There are a few different educational applications I can think of: a photography class contributing to a bank of photos, or an art or history class studying a landmark by looking at past and current images of a site. The ubiquity of cameras has changed the way we interact with the world around us, and it is amazing to see how new technologies are being adapted to take advantage of that.


    ( 0 upvotes and 0 downvotes )
  2. michael orlandi

    While teaching drafting, I could see this technology being useful for looking at a famous area with a couple of buildings and then discussing city site plans, or for analyzing a structure as it is being built. These are the ideas that come to mind for me as a shop teacher.

    You mentioned how this could keep Google Street View up to date. I am currently house hunting with my partner, and I checked out the Street View of a house before we requested a viewing. The Street View looked pretty decent (they also didn’t have a photo of the front of the house on the listing). I don’t know how old the imagery was, but the house looked way worse in person, clearly neglected since the time of the photo. It would have been nice to have an updated photo. Then again, I don’t know how many people would take a photo of the house and upload it to the cloud… but I would have appreciated it.


    ( 1 upvotes and 0 downvotes )
    1. sean gallagher

      Hi Michael. I suppose it will be well into the future when sufficient photos of mundane places exist to allow this technology to model them, but while the aspect of this that most fascinates me is the use of existing images, there may come a time when, either for money or bragging rights, tech companies like Google use this or similar technology to volume-map everything. Remember when you pretty much had to live in a large city to see your house on Google Maps satellite view? Then pretty much everything could be seen on satellite view? Remember when street view was kind of the same? I suspect that once enough photographs exist to allow NeRFs to do their work, we’ll see more and more 3-D information about everything. Only a few trillion more pictures to go!


      ( 0 upvotes and 0 downvotes )
  3. Anton Didak

    Hey there Sean,
    I think this would be well suited to applications within education. Other fields of study like archeology, sociology, and even anthropology could benefit from this neat combination of photography and computer science. It is a shame an application has not yet been developed, but it would be enjoyable to play with and share creations in a Google Street View format.
    Do you think this technology will only be applicable for education and business purposes or be featured in a more user-friendly format like street view in the near future?
    Are there any other more accessible forms of technology that could be direct competition for NeRF?


    ( 1 upvotes and 0 downvotes )
    1. sean gallagher

      Hi Anton. I could certainly see this technology being rolled into Google Maps street view. As illustrated, it features crowdsourced photos (rather than photos purposefully shot) but the same algorithms for, say, removing pedestrians or accounting for lighting and weather suggest that if the Google Maps car is willing to drive around the block a few times, a passable 3-D model of the houses, stores, buildings, etc., could be possible with minimal effort on Google’s part.

      Could the same tech find a place in an end-user app? Perhaps. I could imagine it being used with, say, augmented reality to allow better integration of data/information with the real-world scene, or to generate “reverse” 3-D models of interior spaces for various purposes. As far as accessible similar technologies go, they would appear to exist (something automated, after all, must be generating the blocky, Lego-like 3-D view that already exists in Google Maps street view), but this is just a level up from that.


      ( 0 upvotes and 0 downvotes )
  4. kelvin nicholls

    Hi Sean,
    Thank you for opening my eyes to a world that I have not yet had the chance to consider. I had never thought about the sheer number of photos being taken each day, and I think the technology you have presented is an innovative and creative way to take advantage of the massive amount of photographic data that is out there. As I was watching the video you added to your presentation, I immediately thought about how this type of technology could provide a replicated world for use in a virtual setting, such as a video game. Games like Microsoft’s new Flight Simulator have attempted to render an exact replica of the earth and its landscape through the use of mapping software, but I feel like this is the next level, especially considering the realistic detail of the 3D images being generated using NeRFs. Beyond the creation of the images, did you come across any speculative uses for NeRF?


    ( 2 upvotes and 0 downvotes )
    1. sean gallagher

      Hi Kelvin. I didn’t find any other specific uses for NeRF, above and beyond whatever we might imagine we could do with some automagically-generated 3-D models of places and things, though this could change either as the technology becomes feasible for things that aren’t buildings and such, or when we’ve amassed enough “cloud” images to do more than the CN Tower and similar. So the technology itself isn’t going to turn our lives, as we know them, on their heads, but I think it’s a good early example of what we might be able to do when we start to harness the power of what’s already out in the cloud, and in that respect, I suppose there’s considerable overlap with something like this and “Big Data”. Once we can analyze what’s out there passively and by machine, I could foresee doing similar things to analyze everything from fashion trends to global literacy. We’re making information faster than we’re using it, but we might catch up!


      ( 0 upvotes and 0 downvotes )
  5. Elixa Neumann

    This technology could really help advance the realm of 3D modeling, design, and printing. Utilizing new technology like this can improve our immersive worlds, enabling better access to world sites through technology instead of traveling directly to them and increasing our carbon footprints. As you mentioned, there are a lot of ways this can benefit our economy and environment.

    What are some of the concerns or drawbacks you can see with this new technology? Who would own the rights to these models and concepts if they come from a built collective?


    ( 2 upvotes and 0 downvotes )
    1. sean gallagher

      Hi Elixa. I’m not sure I see any specific concerns with this technology, or at least not now, when it seems limited to public landmarks and similar. If we “progress” to where we can model humans based on crowdsourced/cloudsourced photos, then perhaps I’d have more to say! As far as ownership goes, while I’m not a lawyer, I would have to assume that typical copyright considerations would not apply. If I take a photo of a hummingbird perched on a loonie, I will own the copyright for the reproduction or distribution of that image, but copyright would not prevent someone else from (say) using that image to figure out how big (or small!) hummingbirds are. In other words, we can use copyrighted images… we just can’t republish or distribute them.

      That said, insofar as the landmarks that are currently the subject of this technology are, in a sense, in the public domain, I would certainly hope that Google would make the 3-D models of them public as well. In classic Google style, I suspect they did this to show they could, not in the hope of a financial windfall.


      ( 1 upvotes and 0 downvotes )
      1. ben zaporozan

        I see something similar here to the transformation of Don Tapscott’s career from the crowdsourcing of ideas in Wikinomics [https://dontapscott.com/books/wikinomics/], how mass collaboration changes everything, to The Blockchain Research Institute [https://dontapscott.com/research-programs/blockchain-research-institute/] where an agent for change takes a deep dive into the strategic business advantages of connectedness.

        You would not own the copyright to that image of the hummingbird on the Loonie, since copyright on currency belongs to the government of Canada; but would they share rights with you as well as royalties, connected through blockchain to monetize and manage the proliferation of the image?

        The deep architecture of the Internet of Things could include crowdsourced lidar cave maps, video, and images, and your thoughts about the future potential are really interesting. Using neural radiance fields to optimize a volumetric representation of a scene by and for the collective good of the planet seems a worthwhile test, and you can start with some open-source code on GitHub: https://github.com/bmild/nerf. We will watch eagerly!
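
        For anyone who wants a feel for what that optimization actually computes, here is a rough sketch of the volume-rendering step at the heart of NeRF (plain NumPy; the function and variable names are my own and not taken from the bmild/nerf repository): each camera ray’s colour is a density-weighted blend of the colour samples taken along it.

```python
# Rough sketch of NeRF's volume-rendering rule (my own names, not bmild/nerf code):
# colour and density samples along a camera ray are alpha-composited into one pixel.
import numpy as np

def composite_ray(colors, densities, z_vals):
    """colors: (N, 3) RGB per sample; densities: (N,) sigma; z_vals: (N,) sample depths."""
    deltas = np.diff(z_vals, append=z_vals[-1] + 1e10)        # spacing between samples
    alphas = 1.0 - np.exp(-densities * deltas)                 # opacity of each segment
    # Transmittance: how much light survives to reach sample i without being absorbed.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))
    weights = alphas * transmittance
    return (weights[:, None] * colors).sum(axis=0)             # expected colour of the ray

# Toy example: three samples along a single ray; the dense middle sample dominates.
pixel = composite_ray(
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
    densities=np.array([0.1, 5.0, 0.2]),
    z_vals=np.array([1.0, 2.0, 3.0]),
)
print(pixel)
```

        In the full method, a neural network predicts the colour and density at each queried 3-D point, and training adjusts its weights until the composited ray colours match the pixels of the input photographs.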


        ( 0 upvotes and 0 downvotes )
