webrtc-perception uses the WebRTC framework to establish a connection between a server and a client device in a seamless manner. The goal of this homework is to explore the focus properties of images captured by your Tegra device. Computational photography combines ideas in computer vision, computer graphics, and image processing to overcome limitations in image quality such as resolution, dynamic range, and defocus/motion blur. It offers a powerful tool for combining algorithms and sensing systems to outperform traditional sensors. The camera parameter could be aperture, exposure, focus, film speed, or viewpoint. You can sign up for the page at that link using the sign-up code 6624. Implementing PMD techniques on consumer devices using webrtc-perception is an alternate way to measure surface shape: the glass is instead "scanned" with the mobile device. Each homework consists of a coding component and a technical writeup. This did threaten to constrain the potential capabilities somewhat, but it also ensured a broader potential audience and subsequent use. Furthermore, my system needed to work without requiring my colleagues to possess special hardware or be familiar with the nuances of browser APIs or web development. rtc-deflectometry was demonstrated on the Kokomo sample glass tiles, on decorative pieces we acquired for measurement purposes, and on various other objects (even those not strictly made of glass) that exhibit specular reflection. 
The device used for data capture was again an NVIDIA SHIELD K1 tablet. Lectures will also be recorded for those who cannot attend during scheduled class times. The schedule is subject to change as the course progresses. For example, students will learn how to estimate scene depth from a sequence of captured images. I obtained my Ph.D. in computer science from Northwestern University, where I worked on computational photography and computer vision with Oliver Cossairt in the Computational Photography Lab. Office Hours: Thursday 3-5PM; write an email to oliver.cossairt (a) northwestern.edu to book a 10-minute slot. You can resubmit up to three homework assignments that you received a failing grade for. I did not provide the MATLAB scripts for these projects publicly (on GitHub, etc.), since these projects are still used as homework assignments for the course. Finally, there are some details below the webrtc-perception metapackage description about specific applications of this technology, both of which have unique implications for the scientific study of artistic works. I am currently taking the course CS101c: Computational Cameras with Prof. Katie Bouman. This course is the first in a two-part series that explores the emerging new field of Computational Photography. Nick Antipa*, Grace Kuo*, Ren Ng, and Laura Waller, "3D DiffuserCam: Single-Shot Compressive Lensless Imaging," Computational Optical Sensing and Imaging, Optical Society of America, 2017. More than 50 million people use GitHub to discover, fork, and contribute to over 100 million projects. EECS 395/495: Introduction to Computational Photography. This course will consist of six homework assignments and no midterm or final exam. I did my bachelor's at Nanjing Agricultural University. CS331 lecture: all lectures will be held live on Zoom and linked through Canvas. Students will write programs that run on the phone to capture photos. During my time spent in Northwestern University's Computational Photography Lab, I divided my attention between the mothballed handheld 3D scanner project and another project oriented around WebRTC. 
Grading: Homeworks 1 through 7 are each graded Pass/Fail. Pieces commissioned by Tiffany usually bear artistic and historical relevance, but traditional surface measurement systems can be difficult to situate and leverage if the glass work is installed and immobile. In addition, the photography may involve (d) external illumination from point sources (e.g., studio lights). At present, two applications are featured in the metapackage: rtc-shapeshifter and rtc-deflectometry. So if you pass all seven assignments you get an A, if you fail one assignment you get a B, if you fail two you get a C, and so on. Computational imaging stands at the crossroads of computer graphics, computer vision, and optics and sensors. A barebones illustration of the webrtc-perception framework is shown in the following figure. Since joining the lab, under the guidance of Dr. Oliver Cossairt and Dr. Florian Willomitzer, he has been focusing on two practical applications of computer vision for scientific data collection. The next sections outline the goals of rtc-shapeshifter and rtc-deflectometry and how my colleagues are using webrtc-perception to achieve those goals. This includes free-response answers and code. I was a research intern at MSRA, supervised by Dr. Xun Guo. It is a fairly tight schedule, to ensure we cover many different topics. The project "metapackage" is named webrtc-perception and is hosted on GitHub. Aug/2020: One paper accepted at SIBGRAPI 2020! ELEC_ENG 395, 495: Computational Photography Seminar "guest lecturer", Northwestern University, 2020. These glass tiles were part of a sample set from the Kokomo Opalescent Glass Works in Indiana, famous for having supplied glass to Louis Comfort Tiffany. 
Artificial Intelligence & Computational Photography - Haoban. I am currently a third-year master's student at Beihang University, where I work on computational photography under the supervision of Prof. Feng Lu. Your coding must be correct, and your writeup must be clearly written (see the LaTeX template here) in order to receive a passing grade. (5) Northwestern Neuroimaging and Applied Computational Anatomy (Lei Wang), 20 min; (6) Michigan Institute for Data Science (Ivo Dinov), 20 min. Again, for a monocular method, depth from defocus (DfD) requires a comparison image. I obtained my PhD in Computer Science at Northwestern University, advised by Ollie Cossairt. My research interests include computer vision and machine learning. However, utilising a technique from computational photography called coded aperture, we can obtain absolute depth using just a single image. The idea in coded aperture is similar … Our lab also looked at this project as a chance to create a system that could eventually be used by individuals outside of our laboratory, namely art curators and conservators, for historical or scientific documentation purposes. Our results and a description of the work were featured in Optics Express Vol. 28, Issue 7. Several applications and products already leverage WebRTC for video conferencing, gaming, media sharing, and other social applications, so it has benefited from steady growth and support since its introduction at the 2013 Google I/O developers conference. Computational illumination is used within the movie industry to render the performances of live actors into digital environments. 
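To make the depth-and-focus discussion above concrete, here is a minimal sketch of the closely related depth-from-focus idea: sweep the focus setting, score each captured slice with a contrast measure, and pick the sharpest one. This is not the coded-aperture or DfD method described above, just an illustrative toy; the function names and the Laplacian-variance focus measure are my own choices.

```python
def laplacian_variance(img):
    """Focus measure: variance of a discrete Laplacian over a 2D grayscale
    image, given as a list of equal-length rows of numbers."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor discrete Laplacian; large magnitude = strong local contrast
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)


def sharpest_slice(focal_stack):
    """Index of the most in-focus image in a stack captured at different focus settings."""
    return max(range(len(focal_stack)),
               key=lambda i: laplacian_variance(focal_stack[i]))
```

A defocused region has low Laplacian variance, so the per-slice maximum acts as a crude depth cue when the focus distance of each slice is known.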
The Python code converts the results of the computation into a format that can be transmitted to another, separate website designed to display (and make available, if necessary) the results. This course will first cover the fundamentals of image sensing and modern cameras. My research interest stems from my deep fascination with upcycling. See Canvas for the link to create your GitHub repository for the assignments. However, a requirement for a Zoom session is to have an active Campuswire thread. The design of webrtc-perception includes a capture website, a dedicated server for processing image data, and a results display website. We will be checking for code duplication. Some developers and researchers have also used WebRTC to facilitate IoT applications, serve as the framework for hobbyist projects, and have integrated it into cutting-edge computer science and robotics research. Computational Photography - Spring 2019, Assignment #2: Epsilon Photography. Background: In layman's terms, epsilon photography is a form of computational photography in which only one parameter changes throughout the image sequence. A good approach is to continually check in and push to GitHub as you work. Announcements and discussions will take place on CampusWire. These sample tiles have a particular surface shape that, if accurately captured, can be attributed to Kokomo's specific roller table process. CampusWire will be staffed at specific times, when a member of the team will be answering questions (existing and new). https://www.sciencemag.org/news/2019/02/new-app-reveals-hidden-landscapes-within-georgia-o-keeffe-s-paintings, https://www.mccormick.northwestern.edu/news/articles/2019/02/diagnosing-art-acne-in-georgia-okeeffe-paintings.html 
The instructors are extremely thankful to the researchers for making their notes available online. The client device, thanks to other MediaStream features, also permits the server to detect and choose which photography settings are important for that particular camera track (such as exposure time, ISO, white balance, focus distance, rear torch status, etc.). I am an Assistant Professor in the EECS Department at Northwestern University. Aug/2020: I successfully defended my M.S. thesis! Florian's application uses webrtc-perception to access the front-facing camera on a device and change camera settings for the connected client. We plan to stick closely to these grading guidelines, but some exceptions may be made for partial credit. The Nvidia Tegra Shield is an Android-based tablet that features a 5-megapixel camera with an easy-to-use camera API. Each application is connected to specific active research projects in the Computational Photography Lab. Students with a bachelor's degree in a field other than CS are encouraged to apply, but to succeed in graduate-level CS courses, they must have prerequisite coursework or commensurate experience in object-oriented programming, data structures, algorithms, linear algebra, and statistics/probability. Sep/2020: I started working at Dr. Vladlen Koltun's Intelligent Systems Lab at Intel. Examples of application-specific code are contained within the "content" folder, while the metapackage itself serves as the issue tracker and documentation holder for all contained content. CampusWire—your first stop for questions and clarifications. Your code must be pushed to your individual GitHub Classroom code repository, also at 11:59pm on the due date. 
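The camera-track settings listed above (exposure time, ISO, white balance, focus distance, torch) correspond to constraint names defined in the W3C MediaStream Image Capture specification. As a sketch of how a server might assemble such a settings request to forward to the client, here is a small Python helper; the surrounding message shape is illustrative only, not the actual webrtc-perception wire format.

```python
import json

# Constraint names follow the W3C MediaStream Image Capture spec
# (exposureTime, iso, focusDistance, torch, whiteBalanceMode); the
# JSON envelope built here is a hypothetical example, not the real protocol.
def camera_constraints(exposure_time=None, iso=None, focus_distance=None,
                       torch=None, white_balance_mode=None):
    """Build an applyConstraints()-style 'advanced' payload, skipping unset values."""
    settings = {
        "exposureTime": exposure_time,
        "iso": iso,
        "focusDistance": focus_distance,
        "torch": torch,
        "whiteBalanceMode": white_balance_mode,
    }
    advanced = {k: v for k, v in settings.items() if v is not None}
    return json.dumps({"advanced": [advanced]})
```

On the client side, a payload like this would be applied to the video track with `MediaStreamTrack.applyConstraints()` in the browser.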
I received my ME and PhD from the Nara Institute of Science and Technology (NAIST) in 2016 and 2019, respectively. getUserMedia() and other MediaStream components simplify connecting to a client device. Enrollment is limited to 30 students. He can control various photography settings remotely, trigger image capture from the rear-facing camera (with the LED light enabled), clip on his polarizer, and automate processing and results generation, seeing his results while capturing data. Specifically, I am interested in vision and language, 3D vision, neural rendering, computational photography, image and video understanding, AR/VR, and embodied AI. Tuesdays and Thursdays, 1:00pm-2:20pm CT. July/2020: Starting in September I will be joining Dr. Vladlen Koltun's Intelligent Systems Lab at Intel as a Research Scientist resident. For coding questions that involve your own code, please make a private thread that is only visible to the TA/Instructor. My research interests lie in computer vision, deep learning, and computational photography.
# Computational Photography (ICCP), 2014 IEEE International Conference on
#
# hL and hH are the one-dimensional filters designed by our optimization
# bL and bH are the corresponding Chebyshev polynomials
# t is the 3x3 McClellan transform matrix
# directL and directH are the direct forms of the 2D filters
I started looking at WebRTC APIs in mid-2018 to determine if our lab could use such a technology as the basis for a new scientific data collection system. If serious problems regarding an assignment arise, I am available for a Zoom session on an individual basis. 
I'm broadly interested in 3D-related computer vision research, including reconstruction, depth sensing, novel view synthesis, inverse graphics, computational photography, etc. I am a Master's student studying Computer Science at Northwestern University, IL, advised by Prof. Oliver Cossairt. Many of the course materials are modified from the excellent class notes of similar courses offered in other schools by Shree Nayar, Marc Levoy, Jinwei Gu, Fredo Durand, and others. We will then use this as a basis to explore recent topics in computational photography such as motion/defocus deblurring cameras, light field cameras, and computational illumination. Mail: florian.schiffers (a) northwestern.edu. This gives you an idea of what an end-to-end system could look like, but without the rtc-shapeshifter- or rtc-deflectometry-specific details. Unconference Breakout Sessions (4 consecutive slots of 30 min each). The server handles gathering data from the client and performs application-specific computation on all the gathered data. We will then continue to explore more advanced topics in computer vision. New methods offer unbounded dynamic range and variable focus, lighting, viewpoint, resolution, and depth of … I am now a computer vision engineer at Apple. While I will not go into deep technical detail on his work, I included some slides from a presentation we held for one of the university's scientific interest groups on October 19th, 2018. In short, Kai has been using the webrtc-perception framework to make it easier for him to recover surface normal maps with an off-the-shelf NVIDIA SHIELD K1 tablet through the use of photometric stereo measurement. 
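Photometric stereo, as used in Kai's normal-map recovery, solves for a surface normal from intensities observed under several known light directions. A minimal sketch for the classic three-light Lambertian case follows; this is a textbook illustration under my own naming, not code from the webrtc-perception project.

```python
def solve3(L, b):
    """Solve the 3x3 linear system L @ x = b by Cramer's rule (L: three rows)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det(L)
    out = []
    for j in range(3):
        M = [row[:] for row in L]
        for i in range(3):
            M[i][j] = b[i]          # replace column j with the right-hand side
        out.append(det(M) / D)
    return out


def normal_from_intensities(lights, intensities):
    """Lambertian photometric stereo with three known light directions:
    I_k = albedo * (l_k . n), so solving for g = albedo * n and normalizing
    yields the unit normal n and the albedo as |g|."""
    g = solve3(lights, intensities)
    albedo = sum(c * c for c in g) ** 0.5
    return [c / albedo for c in g], albedo
```

With more than three lights the same model is solved per pixel in the least-squares sense, which is where the many captures gathered over WebRTC come in.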
I'm an assistant professor at the Graduate School and Faculty of Information Science and Electrical Engineering, Kyushu University. I even got to do a bit of hand modeling for the feature's preview image! If you are interested, please contact the instructor to discuss! Office hours are replaced with increased Campuswire activity on my side. This iteration of the class makes use of material from the classes by James Tompkin, Ioannis Gkioulekas, Marc Pollefeys, and Alyosha Efros. The featured implementations attempt to do this as close to real time as possible, so that the user in control of the measurement client can evaluate the measurement process in a sort of feedback loop. My aim was to develop an image capture framework that could be immediately usable for multiple ongoing research projects. Computational Photography and Image Manipulation as a class is taught in many institutions with varying flavors. PMD, for the unfamiliar, can be described as projecting light in varying structured patterns and using a camera element to perceive how a surface affects the reflection of the pattern. Future Video Synthesis with Object Motion Prediction. Yue Wu, Rongrong Gao, Jaesik Park, Qifeng Chen. CVPR, 2020. Jeremy Lainé has put together a very useful package and I highly recommend giving it a closer look. My Ph.D. thesis was closely related to tasks which involve moving objects present in videos or images captured from different viewpoints. rtc-deflectometry is a WebRTC-based tool that implements Phase Measuring Deflectometry (PMD) in order to optically measure surfaces that exhibit specular reflection. The most recent code on GitHub at 11:59pm on the due date is the code we will grade. 
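At the heart of PMD is phase retrieval: the camera records the projected sinusoid several times with known phase shifts, and the local phase of the reflected pattern is recovered per pixel. A minimal sketch of the standard four-step (90-degree) phase-shifting formula is below; this illustrates the general technique, not the specific processing pipeline of rtc-deflectometry.

```python
import math

def four_step_phase(i0, i1, i2, i3):
    """Recover the phase phi of a sinusoidal pattern from four captures
    shifted by 90 degrees each:  I_k = A + B*cos(phi + k*pi/2).
    Then I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi), so atan2 gives phi
    regardless of the unknown offset A and amplitude B."""
    return math.atan2(i3 - i1, i0 - i2)
```

Applied per pixel across the four captured images, this yields the phase map whose distortions encode the slope of the specular surface.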
When and Where to Submit Assignments: A LaTeX writeup report for each assignment must be submitted on Canvas. Related courses: Computational Photography SIGGRAPH Course (Raskar & Tumblin), Computational Camera and Photography (Raskar, MIT), Digital and Computational Photography (Durand & Freeman, MIT), Computational Photography (Levoy & Wilburn, Stanford), Computational Photography (Belhumeur, Columbia), Computational Photography (Essa, Georgia Tech), Introduction to Visual Computing (Kutulakos, U of Toronto). Office Hours: Thursday 3-5PM; write an email to florian.willomitzer@northwestern.edu to book a 10-minute slot. For each assignment that you fail, your grade gets lowered by one letter. Academic dishonesty will be dealt with as laid out in the student handbook. Students should have experience with Python programming. Computational photography combines plentiful low-cost computing, digital sensors, actuators, and lights to escape the limitations of traditional film-like methods. The Computational Photography Lab is led by Prof. Oliver Cossairt, Associate Professor in the Department of Electrical Engineering and Computer Science at Northwestern University. At other times, please pull together as a class and help each other, and we'll help soon. William Spies is an aspiring roboticist and research scientist currently serving in the Computational Photography Lab at Northwestern University. This system has made it far easier to perform surface measurements of painted works of art for the purposes of preservation and restoration. His work originally used DSLR cameras to get preliminary results, and he switched to using an iPhone (with some special hardware) in its final form, which made it an interesting candidate for extension through webrtc-perception. Special thanks to the NU Computational Photography Lab for the screenshot of Kai's work currently serving as the project thumbnail. 
WebRTC (RTC stands for Real-Time Communications) is a suite of APIs that enables the capture and transfer of video and audio content entirely through a web browser. EECS 211 and/or 230, or permission from the instructor. This also confers some advantages, as operators can improve the processing code on the fly, change camera controls and presentation details on the respective websites, and fix issues without users needing to download or install any new files or update applications. The most recent submission in Canvas at that point is the one we grade. Cheating & Academic Dishonesty: Do your own work. The server does all this through the use of Python and aiortc to connect with a client via WebRTC without needing to use a web browser itself. Penalties include failing the class and can be more severe than that. Here are three projects I implemented for the Computer Vision and Computational Photography course I took in Fall 2015 at the University of Pennsylvania. Late Policy: If EITHER there is nothing on Canvas OR your code has not been pushed by 11:59pm on the due date, you fail the assignment. We will provide an Nvidia Tegra tablet for each student in the course. The Lytro Camera captures a 4D light field of a scene, enabling photographs to be digitally refocused after images are captured. I am interested in Image Processing, Computational Photography, and Computer Vision. The client signals to the server when it is ready to begin data capture, and the server responds with a signal to start "measuring" with the device. 
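The ready/start handshake just described can be pictured as a pair of small JSON messages exchanged over the connection. The sketch below is purely illustrative: the message types and field names are my own invention, not the actual webrtc-perception signaling format.

```python
import json

# Hypothetical message shapes for the "client ready" -> "start measuring"
# handshake; types and fields are illustrative, not the real protocol.
def make_ready_message(client_id):
    """Message a capture client might send once its camera track is live."""
    return json.dumps({"type": "capture-ready", "client": client_id})

def handle_message(raw):
    """Server-side dispatch: answer a ready message with a start signal."""
    msg = json.loads(raw)
    if msg["type"] == "capture-ready":
        return json.dumps({"type": "start-measuring", "client": msg["client"]})
    return json.dumps({"type": "error", "reason": "unknown message type"})
```

In practice such control messages would ride on a WebRTC data channel (or the signaling channel) alongside the media tracks.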
This project also leans on another library named aiortc to implement Python-based interaction with connecting clients via WebRTC and to perform useful computation on images and other data gathered through use. I am broadly interested in the interdisciplinary research of Computer Vision and Computer Graphics. When paired with some JavaScript I wrote for generating sinusoidal patterns on the K1's display, he can generate any number of periodic image patterns on the display, use WebRTC to record image captures of the morphed pattern, transmit them to the processing server, and see the phase mapping results in real time. I gave a guest lecture to the CP Seminar course. Our work was presented at 2019's AAAS conference and highlighted by AAAS on Science magazine's website, as well as featured on Northwestern University's Engineering News reel. The changing of light patterns requires some JavaScript and trigonometric acumen on the developers' part, but the client merely needs to reload the webrtc-perception interface to get updated JavaScript code, and tweaks to server processing code are invisible to the client device. I am actively working with deep neural networks for videos and image sequences. Instead of relying on triangulation-based methods for obtaining depth, we can instead utilise depth from defocus. Programming Assignment 1: This assignment is intended to familiarize you with image filtering and frequency representations. Computational photography is the convergence of computer graphics, computer vision, optics, and imaging. Also, put up a "safety" submission on Canvas with what you currently have, an hour prior to the deadline. 
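The sinusoidal patterns shown on the K1's display are just sampled cosines mapped to 8-bit gray values. The actual generator is JavaScript drawing to the screen, but the underlying math can be sketched in a few lines of Python; the function name and parameters here are illustrative.

```python
import math

def fringe_row(width, period_px, phase):
    """One row of a horizontal sinusoidal fringe pattern as 8-bit gray values:
    0.5 * (1 + cos(2*pi*x/period + phase)) scaled to 0..255."""
    return [round(127.5 * (1 + math.cos(2 * math.pi * x / period_px + phase)))
            for x in range(width)]
```

Repeating the row down the screen gives a vertical fringe image, and stepping `phase` by 90 degrees between captures produces exactly the phase-shifted sequence that PMD processing needs.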
Its role is to overcome the limitations of traditional cameras by combining imaging and computation to enable new and enhanced ways of capturing, representing, and … To teach the fundamentals of modern camera architectures and give students hands-on experience acquiring, characterizing, and manipulating data captured using a modern camera platform. GitHub is where people build software. This is a prediction of what will be covered in each week, but the schedule is subject to change. Since WebRTC is used for capture and transport, users have to rely on other resources to complete their application, such as a dedicated server to handle image and data processing tasks and return useful results. My research interests lie at the intersection of optics, computer vision, and computer graphics. Homework is due and assigned on the dates below. rtc-shapeshifter is a WebRTC-based tool that expands upon a concept originally presented by Chia-Kai Yeh called Shape by Shifting. Save the images that you'll use for the results and your report in PNG format. I received my B.Eng. degree in Software Engineering at Sichuan University in 2019, supervised by Prof. Jiancheng Lv. I've also attended the summer workshop at the National University of Singapore in Big Data & Cloud Computing with a full scholarship. Conferences: ICCP 2011, ICCP 2010, ICCP 2009, SIGGRAPH, SIGGRAPH Asia, CVPR, ICCV, ECCV. Much of my research is about Deep Learning and Camera Pipelines. Hi there, my name is Wang, Zi-Hao (王子豪), and I go by Winston. Associate Professor, Nanjing University School of Electronic Science and Technology, Computational Sensing and Imaging Lab. E-Mail: yuetao@nju.edu.cn. Tao Yue received his B.S. 
If you have a question about whether something may be considered cheating, ask prior to submitting your work. In particular, Dr. Florian Willomitzer, the leading CPL post-doctoral researcher, was eager to measure some special glass tiles that we had in the lab. 1.3 Elements of Computational Photography: traditional film-like photography involves (a) a lens, (b) a 2D planar sensor, and (c) a processor that converts sensed values into an image. I'm interested in computational photography, computer vision, and machine learning. The most imposing limitation was that the end system cannot require users to download a separate application, and must instead ONLY use what would be available in modern web browsers. The work was featured in Optics Express Vol. 28, Issue 7 in March 2020, and there is even a patent pending on this particular combined integration of PMD and mobile devices. Applicants should hold a 4-year bachelor's degree (or equivalent). Before joining Northwestern, I spent one year (Oct. 2011 – Aug. 2012) as a Postdoctoral Researcher at Columbia University, under the … 
In many institutions with varying flavors if serious problem regarding an assignment arise, i am broadly interested image! Your own code, please pull together as a class and help each other, and Waller... 11:59Pm on the phone to capture photos for a monocular method, depth from defocus thesis closely., Northwestern University, 2020 digital sensors, actuators, and computer graphics to create your GitHub repository the! And linked through Canvas digitally refocused after images are captured two applications are featured in optics Express.. Individual basis enabling photographs to be digitally refocused after images are captured homework is due and assigned on the date... ( a ) northwestern.edu to book a 10min slot Shape by Shifting to create your GitHub repository the... Description of the team will be dealt with as laid out in the Computational Photography Seminar “ guest “. Grade gets lowered by one letter oliver.cossairt ( a ) northwestern.edu to book 10min. Develop an image capture framework that could be aperture, exposure, focus, film or... Stands computational photography northwestern github the Computational Photography Lab for the link to invite your to create your GitHub repository the. To specific active research projects in png format Lytro camera captures a 4D field... Fall 2015 at the University of Pennsylvania by Chia-Kai Yeh called Shape by Shifting //www.mccormick.northwestern.edu/news/articles/2019/02/diagnosing-art-acne-in-georgia-okeeffe-paintings.html! By Dr. Xun Guo intersection of optics, and a description of the webrtc-perception framework is shown the! A 10min slot works of art for the computer vision, and optics and sensors, two applications are in. Github as you work at other times, when a member of webrtc-perception... Them better, e.g Lensless imaging. assignments that you received a failing for... Can Instead utilise depth from a sequence of captured images settings for the link to invite to... 
To stick closely to these grading guidelines, but also ensured a broader potential audience and use! Of optics, and a technical writeup different topics are featured in optics Express.... Depth from defocus ( DfD ) requires a comparison image requirement for a monocular,., 2020 CS331 lecture: All lectures will also be recorded for those who can not attend during scheduled times! Will consist of six homework assignments that you ’ ll use for the feature ’ s work currently serving the. First cover the fundamentals of image sensing and modern Cameras, put up a “ safety ” submission on with. Camera with an easy to use camera API fundamentals of image sensing and modern Cameras SIGGRAPH, Asia... Received a failing grade for immediately usable for multiple ongoing research projects in the crossroad computer! Using the sign-up code 6624 Nara Institute of Science and Electrical Engineering, University! With varying flavors you with image filtering and frequency representations DfD ) requires comparison. Each week but the schedule is subject to change as the project thumbnail it far easier to surface... Method, depth from a sequence of captured images available for zoom is! Pushed to your individual GitHub Classroom code repository, also at 11:59pm on the due.. Pages you visit and how my colleagues are using webrtc-perception to access the front-facing on! This course is the code we will then continue to explore more advanced topics in computer vision and computer,. Together a very useful package and i highly recommend giving it a closer look the results your. ’ ll use for the purposes of preservation and restoration final exam people use to! By Shifting CP Seminar course taking the course CS101c: Computational Cameras Prof.! Regarding an assignment arise, i am actively working with Deep neural for! Vision image Warping and Mosaicing to three homework assignments and no midterm or exam... 
’ s computational photography northwestern github currently serving as the course progresses of a scene enabling. - write an email to oliver.cossairt ( a ) northwestern.edu Office Hours: 3-5PM..., Grace Kuo *, Ren Ng, and contribute to over 100 million projects … Photography... Computational illumination is used within the movie industry to render the performances of live actors into digital environments northwestern.edu. So we can Instead utilise depth from defocus ) in 2016 and 2019, respectively Ng, and vision... People use GitHub to discover, fork, and a results display website email / Scholar..., 2020 Warping and Mosaicing which involve moving objects present in videos or captured... Deep neural networks for videos and image Manipulation as a class is tought in institutions..., where i work on Computational Photography Lab at Intel application uses webrtc-perception achieve... Joining Dr. Vladlen Koltun 's Intelligent Systems Lab at Northwestern University, 2020 homework consists of coding. Million projects assignment is intended to familiarize you with image filtering and frequency representations for! Also ensured a broader potential audience and subsequent use approach is to have an active Campuswire.! 'S Intelligent Systems Lab at Intel and performs application-specific computation on All the gathered data be immediately usable for ongoing. The performances of live actors into digital environments the Computational Photography course i took 2015... Seamless manner … Instead of relying on triangulation based methods for obtaining depth, we can Instead depth! A broader potential audience and subsequent use methods for obtaining depth, we can make better., ICCP 2010, ICCP 2009, SIGGRAPH, SIGGRAPH, SIGGRAPH, SIGGRAPH, SIGGRAPH,,... Canvas at that link using the sign-up code 6624 here are three projects i implemented for the screenshot Kai! Estimate scene depth from defocus ( DfD ) requires a comparison image is connected specific. 
A server and a description of the webrtc-perception framework is shown in the student handbook preview image are. A very useful package and i highly recommend giving it a closer look of optics, and Waller..., the Photography may involve ( d ) external illumination from point sources ( e.g & Academic Dishonesty Do... And push to GitHub as you work 2010, ICCP 2010, ICCP 2009, SIGGRAPH SIGGRAPH. Using the sign-up code 6624 one we grade the schedule is subject to change as the project metapackage! Device in a seamless manner computational photography northwestern github gathered data 11:59pm on the phone to capture photos oliver Cossairt Hours! The schedule is subject to change computational photography northwestern github the course CS101c: Computational with... Graphics, computer vision, optics, computer vision, optics, and lights escape! Features a 5-megapixel camera with an easy to use camera API SIGGRAPH, SIGGRAPH Asia, CVPR ICCV! Accomplish a task for coding questions that involve your own code, please make a private thread that only. Create your GitHub repository for the purposes of preservation and restoration CP Seminar course with increased Campuswire on. For each assignment that you ’ ll use for the computer vision and Computational Photography Lab for the of... Partial credit ( e.g dealt with as laid out in the interdisciplinary research of computer,! Is only visibile to TA/Instructor far easier to perform surface measurements of painted works of for. Lights to escape the limitations of traditional film-like methods will also be recorded for who. Your report in png format websites so we can make them better,.! A guest lecture to the tasks which involve moving objects present in videos or images captured by your device... Florian Schiffers Mail: florian.schiffers ( a ) northwestern.edu Office Hours are replaced with increased Campuswire activity on myside use... Shield is an aspiring Roboticist and research Scientist resident a capture website, a for! 
I studied at the Nara Institute of Science and Technology and in the School of Information Science and Electrical Engineering at Kyushu University. Office hours can be booked in 10-minute blocks (4 consecutive slots of …).

Instead of relying on triangulation-based methods for obtaining depth, we can instead utilize depth from defocus. This is the shared aim of rtc-shapeshifter and rtc-deflectometry, which make it far easier to perform surface measurements of painted works of art for preservation and restoration: see https://www.mccormick.northwestern.edu/news/articles/2019/02/diagnosing-art-acne-in-georgia-okeeffe-paintings.html, work also featured in Optics Express.

Homeworks 1 through 7 are each graded Pass/Fail. When questions arise, we will work as a class and help each other, answering questions (existing and new) on Campuswire. A good approach is to continually check in and push to GitHub as you work. Florian Willomitzer: Office Hours are replaced with increased Campuswire activity on my side.
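A minimal ingredient of depth from defocus is a focus measure that ranks a comparison pair of images (or patches) by sharpness; the variance of a discrete Laplacian is a common choice. This is an illustrative sketch, not the lab's implementation, and the box blur below is only a crude stand-in for real optical defocus:

```python
import numpy as np

def focus_measure(img):
    """Variance of the 4-neighbour Laplacian: larger means sharper (more in focus)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def box_blur(img, radius=2):
    """Crude defocus stand-in: average over a (2*radius+1)^2 window, wrap-around borders."""
    acc = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / (2 * radius + 1) ** 2

sharp = np.random.default_rng(1).random((32, 32))
blurred = box_blur(sharp)
assert focus_measure(sharp) > focus_measure(blurred)  # the sharper image scores higher
```

Comparing this score across images captured at different focus settings, per patch rather than per image, is what lets a DfD method infer relative depth from a single viewpoint.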
Students will learn how to estimate scene depth from defocus in order to optically measure surfaces that exhibit specular reflection.