To solve this problem, I employed an image-to-image network, called a lens, that is placed in front of the classification network and trained adversarially to make it harder for the classification network to classify the images. An additional “reproduction loss”, i.e. a penalty for editing the image too much, ensures that the lens can only remove shortcuts.
The first row of the image below shows an example training dataset of grayscale images of geese and flamingos, where the flamingo images have watermarks added to them. The second row shows the output of the lens. As you can see, the lens partially removes the watermarks and partially adds watermarks to the goose images. This makes the subsequent classification task much harder, so a trained network does not use the watermarks as clues.
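The training objective described above can be sketched as follows. This is an illustrative toy version, not the paper’s actual code; the function name and the weighting parameter `lam` are assumptions for the sake of the example (images are flattened to lists of pixel values):

```python
def lens_loss(original, edited, classifier_loss, lam=1.0):
    """Objective minimized by the lens (sketch): make classification
    harder (negative classifier loss) while keeping edits small
    (mean-squared reproduction loss). `lam` trades off the two terms
    and is an assumed hyperparameter name."""
    n = len(original)
    reproduction_loss = sum((e - o) ** 2 for o, e in zip(original, edited)) / n
    return -classifier_loss + lam * reproduction_loss
```

Intuitively, a large `lam` makes any edit expensive, so the lens leaves images unchanged; a tiny `lam` lets the lens distort images arbitrarily. A moderate penalty leaves only enough “budget” for small, localized edits such as watermarks.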
Full details about this research can be found in the published CVPR-Workshops paper:
Nicolas M. Müller*, Jochen Jacobs*, Jennifer Williams, and Konstantin Böttinger. “Localized Shortcut Removal,” 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 2023, pp. 3721-3725, DOI: 10.1109/CVPRW59228.2023.00382, available at the Computer Vision Foundation
*equal contribution
A more detailed version is available on ArXiv:
Nicolas M. Müller*, Jochen Jacobs*, Jennifer Williams, and Konstantin Böttinger. “Shortcut Removal for Improved OOD-Generalization,” arXiv:2211.15510 [cs.CV]
*equal contribution
There is also a paper that developed from this research; it was presented at ACM SAP 2019 and published in ACM TAP 2019. You can find both the paper and the bachelor thesis below.
Jochen Jacobs, Xi Wang and Mark Alexa - TU-Berlin - Published in TAP 2019, September 2019, Issue 3 (Special Issue on SAP 2019)
DOI: 10.1145/3353902
PDF: here
Head-mounted displays cause discomfort. This is commonly attributed to conflicting depth cues, most prominently between vergence, which is consistent with object depth, and accommodation, which is adjusted to the near-eye displays.
It is possible to adjust the camera parameters, specifically interocular distance and vergence angles, for rendering the virtual environment to minimize this conflict. This requires dynamic adjustment of the parameters based on object depth. In an experiment based on a visual search task, we evaluate how dynamic adjustment affects visual comfort compared to fixed camera parameters. We collect objective as well as subjective data. Results show that dynamic adjustment decreases common objective measures of visual comfort such as pupil diameter and blink rate by a statistically significant margin. The subjective evaluation of categories such as fatigue or eye irritation shows a similar trend but was inconclusive. This suggests that rendering with fixed camera parameters is the better choice for head-mounted displays, at least in scenarios similar to the ones used here.
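The geometry of such an adjustment can be sketched as follows; the function and parameter names are mine, not the paper’s, and a symmetric setup with the fixation point straight ahead is assumed:

```python
import math

def camera_toe_in_deg(interocular_m, depth_m):
    """Per-camera inward rotation (toe-in) so that both rendering
    cameras, separated by `interocular_m`, converge on a point straight
    ahead at `depth_m`. Illustrative sketch, not the experiment's code."""
    return math.degrees(math.atan((interocular_m / 2.0) / depth_m))
```

For a 64 mm interocular distance, an object at 1 m needs roughly 1.8° of toe-in per camera, and the angle shrinks toward zero (parallel cameras) as the object moves away, which is why the parameters must be updated dynamically with object depth.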
Jochen Jacobs - Jul 05, 2019
PDF: here
In recent years, head-mounted displays have become increasingly popular. However, the well-known vergence-accommodation conflict causes a significant amount of discomfort. Some research has proposed dynamically adjusting the rendering so that the eye vergence needed for the currently focused object matches the accommodation.
This thesis uses camera vergence and separation to change the eye vergence based on the user’s gaze. To determine the gaze depth, a probabilistic algorithm is proposed that uses both the measured eye convergence and the scene geometry. An experiment is conducted to determine whether the dynamic vergence and separation changes reduce fatigue. Fatigue is measured objectively (pupil diameter, pupil diameter variance, eye movement speed, blink rate, and reaction time) and subjectively using a questionnaire. The experiment is also used to evaluate the quality of the depth estimation algorithm.
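The purely geometric part of such a depth estimate inverts the vergence relation: for symmetric fixation straight ahead, depth follows from the interocular distance and the measured vergence angle. A minimal sketch with illustrative names, not the thesis code:

```python
import math

def gaze_depth_m(interocular_m, vergence_deg):
    """Fixation depth from the angle between the two gaze rays,
    assuming the user fixates a point straight ahead. Real gaze data
    is noisy, which is why the thesis combines a measurement like this
    with scene geometry."""
    half_rad = math.radians(vergence_deg) / 2.0
    return (interocular_m / 2.0) / math.tan(half_rad)
```

Because the tangent is nearly flat for small angles, small measurement errors in vergence translate into large depth errors at far distances, which is one reason raw eye convergence alone is an unreliable depth estimate.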
Results show that the output of the proposed algorithm to estimate the eye vergence depth is better than simple raycasting and ray intersection algorithms. However, the objective evaluation of the adjustment method shows that it does not help reduce fatigue but in fact increases discomfort. The subjective evaluation was inconclusive, but the trends point in the same direction.
windy-plugin-sun-position
Unfortunately, plugins currently do not work in the app.
Open Plugin | View Code on GitHub
You can also open the plugin by going to Menu → Install Windy plugin → Select “Sun position”.
To open the display, right-click on the map (or tap and hold on mobile), then select “Sun Position”. Then open a weather picker to see the sun dial and the details on the left.
The dial displays the current sun and moon azimuth on the map using a black line from the picker position. Additionally, dashed lines show the azimuth of sunrise and sunset, and dotted lines show the azimuth of moonrise and moonset. Clicking on a sunrise, sunset, moonrise, or moonset line sets the current time to that event’s time.
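For reference, the kind of calculation behind such an azimuth line can be sketched with the standard spherical-astronomy formula below. This is a simplified illustration (declination and hour angle are passed in directly), not the plugin’s actual code, which presumably relies on an astronomy library:

```python
import math

def sun_alt_az(lat_deg, decl_deg, hour_angle_deg):
    """Solar altitude and azimuth in degrees (azimuth clockwise from
    north) from observer latitude, solar declination, and hour angle
    (0 at solar noon, negative in the morning)."""
    lat = math.radians(lat_deg)
    dec = math.radians(decl_deg)
    ha = math.radians(hour_angle_deg)
    alt = math.asin(math.sin(lat) * math.sin(dec)
                    + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.atan2(-math.sin(ha) * math.cos(dec),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0
```

At an equinox (declination 0°) and solar noon, an observer at 50° N sees the sun due south (azimuth 180°) at 40° altitude, which is the line the dial would draw toward the south.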
The detail pane on the left shows the times of astronomical, nautical, and civil dusk and dawn; the start and end of blue and golden hour; and solar noon. Moonrise and moonset times are also added to the timeline. Below, a diagram of the sun and moon altitudes over time is displayed. Below that, details about the current sun and moon position are shown.
At the top of the detail pane, individual displays can be enabled and disabled. The telescope toggles visibility of astronomical sun details (astronomical and nautical dawn and dusk). The camera toggles visibility of blue and golden hour times. The moon toggles visibility of moon details.
Original release
Fixed timezone issues: