Back to the Greek Universe
GLAM mix'n'hack 2019, updated 2020-02-06 14:44:00
Back to the Greek Universe is a web application that lets users explore the ancient Greek model of the universe in virtual reality and realize what detailed knowledge the Greeks had of the movement of the celestial bodies observable from the Earth's surface. The model is based on Claudius Ptolemy's work, which adopts a geocentric view of the universe, with the Earth at its center.
Ptolemy placed the planets in the following order:
Moon
Mercury
Venus
Sun
Mars
Jupiter
Saturn
Fixed stars
The movements of the celestial bodies as they appear to earthlings are expressed as a series of superposed circular movements (see deferent and epicycle theory), each characterized by its own radius and speed. The tabular values that serve as inputs to the model have been extracted from the literature.
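To make the superposition concrete, here is a minimal Python sketch of a single deferent-and-epicycle pair (our own illustration; the radii and angular speeds are made-up placeholders, not the tabular values used by the project):

import numpy as np

def ptolemaic_position(t, R_deferent, omega_deferent, r_epicycle, omega_epicycle):
    # Centre of the epicycle travels on the deferent circle around the Earth
    cx = R_deferent * np.cos(omega_deferent * t)
    cy = R_deferent * np.sin(omega_deferent * t)
    # The planet itself travels on the smaller epicycle around that centre
    x = cx + r_epicycle * np.cos(omega_epicycle * t)
    y = cy + r_epicycle * np.sin(omega_epicycle * t)
    return x, y

# Sample one full revolution of the deferent (placeholder values)
ts = np.linspace(0, 2 * np.pi, 100)
xs, ys = ptolemaic_position(ts, R_deferent=10.0, omega_deferent=1.0,
                            r_epicycle=3.0, omega_epicycle=8.0)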
Demo Video
Claudius Ptolemy (~100-160 AD) was a Greek scientist working at the library of Alexandria. One of his most important works, the «Almagest», sums up the geographic, mathematical and astronomical knowledge of the time. It is the first outline of a coherent system of the universe in the history of mankind.
Back to the Greek Universe is a VR model that rebuilds Ptolemy's system of the universe on a scale of 1:1 billion. The planets are rendered 100 times larger, the Earth rotates 100 times more slowly, and the planets' orbital periods run 1 million times faster than they would according to Ptolemy's calculations.
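In code, the scale factors stated above amount to something like this (a sketch; the function names are ours):

SCALE_DISTANCE = 1 / 1_000_000_000  # model scale 1:1 billion
SCALE_PLANET_SIZE = 100             # planets rendered 100x larger
SCALE_TIME = 1_000_000              # orbital periods 1 million times faster

def model_radius_m(real_radius_m):
    return real_radius_m * SCALE_DISTANCE * SCALE_PLANET_SIZE

def model_period_s(real_period_s):
    return real_period_s / SCALE_TIME

# A 365-day orbital period collapses to about 31.5 seconds in the simulation
print(model_period_s(365 * 24 * 3600))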
Back to the Greek Universe was coded and presented at the Swiss Open Cultural Data Hackathon/mix'n'hack 2019 in Sion, Switzerland, from Sept 6-8, 2019, by Thomas Weibel, Cédric Sievi, Pia Viviani and Beat Estermann.
Instructions
This is how to fly Ptolemy's virtual spaceship:
Point your smartphone camera towards the QR code, tap on the popup banner in order to launch into space.
Turn around and discover the ancient Greek solar system. Follow the planets' epicyclic movements (see above).
Tap in order to travel through space, in any direction you like. Every single tap will teleport you roughly 18 million miles forward.
Back home: Point your device vertically down and tap in order to teleport back to earth.
Gods' view: Point your device vertically up and tap in order to overlook Ptolemy’s system of the universe from high above.
The cockpit on top is a time and distance display: the years and months indicator gives you an idea of how rapidly time passes in the simulation, while the miles indicator always displays your current distance from the Earth's center (in million nautical miles).
Data
The data used include 16th-century prints of Ptolemy's main work, the Almagest (in both Greek and Latin), and high-resolution surface photos of the planets in Mercator projection. The photos are mapped onto rotating spheres by means of Mozilla's WebVR framework A-Frame.
Earth map (public domain)
Moon map (public domain)
Mercury map (public domain)
Venus map (public domain)
Sun map (public domain)
Mars map (public domain)
Jupiter map (public domain)
Saturn map (public domain)
Stars map (Milky Way) (Creative Commons Attribution 4.0 International)
Primary literature
Simon Grynaeus: Kl. Ptolemaiou Megalēs syntaxeōs bibl. 13, public domain
Peter Liechtenstein: Almagestum CL. Ptolemei Pheludiensis Alexandrini astronomorum principis opus ingens ac nobile omnes celoru motus continens, public domain
Secondary literature
Richard Fitzpatrick: A Modern Almagest, An Updated Version of Ptolemy’s Model of the Solar System
John Cramer: The Ptolemaic System, A Detailed Synopsis
Astrophysikalisches Institut Neunhof: Das Weltmodell des Ptolemaios
Version history
2019/09/07 v1.0: Basic VR engine, interactive prototype
2019/09/08 v1.01: Cockpit with time and distance indicator
2019/09/13 v1.02: Space flight limited to stars sphere, minor bugfixes
2019/09/17 v1.03: Planet ecliptics adjusted
Media
Back to the Greek Universe Video (mp4), public domain
Team
Thomas Weibel (weibelth)
Cédric Sievi
Pia Viviani (pia)
Beat Estermann (beat_estermann)
Tags: concept, dev, design, glam
CoViMAS
GLAM mix'n'hack 2019, updated 2019-09-08 15:17:00
Collaborative Virtual Museum for All Senses (CoViMAS) is an extended virtual museum which engages all the senses of its visitors. It is a substantial upgrade and expansion of our award-winning GlamHack 2018 project "Walking around the Globe" (http://make.opendata.ch/wiki/project:virtual_3d_exhibition), in which the DBIS Group from the University of Basel teamed up with the ETH Library to introduce a prototype of an exhibition in virtual reality.
CoViMAS aims to provide a collaborative environment for multiple visitors in the virtual museum. This feature allows them to have a shared experience through different virtual reality devices.
Additionally, CoViMAS enriches the user experience by providing physical objects which can be manipulated by the user in virtual space. Thanks to the mix'n'hack organizers and the FabLab (https://fablab-sion.ch/), the user is able to touch postcards, view them closely, and feel their texture.
To add a modern touch to the older pictures in the provided data, we add colorized images alongside the existing ones, giving a more lively look into the past through the pictures in the virtual museum.
Project Timeline
Day One
CoViMAS joins forces across disciplines: the team comprises a maker, a content provider, developers, a communicator, a designer, and a user experience expert. Having different backgrounds and expertise was a great opportunity to explore different ideas and broaden the horizons of the project.
Two vital components of this project are the virtual reality headsets and the datasets to be used. The HTC Vive Pro VR headsets were converted to wireless mode after our last experiment, which proved that freedom of movement without wires attached to the user improves the user experience and usability.
Our content provider and designer spent a great deal of time searching for representative postcards and audio that could be integrated into the virtual space and had the potential to improve the virtual reality experience by adding extra senses. This included selecting postcards which can be touched and seen in virtual and non-virtual reality. Additionally, the idea came up of playing a sound related to the picture being viewed; this audio should correlate with the picture and recreate the atmosphere of the pictured scene for the user in the virtual world.
To integrate modern methods of image manipulation through artificial intelligence, we tried using a deep learning method to colorize the grayscale images of "Fotografien aus dem Wallis von Charles Rieder". The colorized images let visitors get a more tangible feel for the pictures they are viewing. The initial implementation of the algorithm showed the challenges we face; for example, faded or scratched parts of the pictures could not be colorized very well.
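A typical deep-learning colorization pipeline of this kind works in the Lab colour space: the network keeps the photograph's lightness channel (L) and predicts only the two colour channels (a, b). A minimal sketch of the idea; the predict_ab model call is a hypothetical placeholder, not our trained network:

import cv2
import numpy as np

def colorize(gray_bgr, predict_ab):
    # predict_ab is a stand-in for the trained model (hypothetical here);
    # it should return an H x W x 2 (a, b) array in OpenCV's 0-255 Lab encoding
    lab = cv2.cvtColor(gray_bgr, cv2.COLOR_BGR2LAB)
    ab = predict_ab(lab[:, :, 0])
    lab[:, :, 1:] = ab              # keep the original lightness channel
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)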
Day Two
Although the VR exhibition is taken over from our previous participation in GlamHack 2018, it needed to be adjusted to the new content. We designed the rooms to showcase the dataset "Postkarten aus dem Wallis (1890-1950)". At this point, the postcards selected for enrichment with additional senses were sent to the FabLab to create a haptic card and a feather palette to be used alongside one postcard which depicts a goose.
The fabricated elements of our exhibition were attached to a tracker which can be seen through the VR glasses; this lets the user know where the object is located and sense it.
The colorization improved over the course of the day through some alterations to the training setup and the parameters used to tune the images. The results at this stage are relatively good.
The VR exhibition hall was also adjusted to automatically load the postcard images as well as the colorized images alongside the originals.
And late at night, while finalizing the work for the next day, most of our stickers changed status from the "Implementation" phase to the "Done" phase!
Day Three
CoViMAS reached its final stage on the last day. The room design was finished and the locations of the images on the walls were determined. The tracker location was updated in the VR environment to match the real location of the object. With this improvement, a postcard can be touched and viewed simultaneously.
Data
Fotografien aus dem Wallis von Charles Rieder https://opendata.swiss/dataset/photographs-of-valais-by-charles-rieder
Postkarten aus dem Wallis (1890-1950) https://opendata.swiss/dataset/postcards-from-valais-1890-1950
Team
Mahnaz Amiri Parian, PhD Student @ Databases and Information Systems Group
Silvan Heller, PhD Student @ Databases and Information Systems Group
Florian Spiess, MSc @ Computer Science Uni Basel
Fabian
Stef
Florence
Tags: concept, dev, design, data, expert, glam
Opera Forever
GLAM mix'n'hack 2019, updated 2019-10-31 17:38:00
Opera Forever is an online collaboration platform and social networking site to collectively explore large amounts of opera recordings.
The platform allows users to tag audio sequences with various types of semantics, such as personal preference, emotional reaction, specific musical features, technical issues, etc. Through the analysis of personal preference and/or emotional reaction to specific audio sequences, a characterization of personal listening tastes will be possible, and people with similar (or very dissimilar) tastes can be matched. The platform will also contain a recommendation system based on preference information and/or keyword search.
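As an illustration of the tagging model described above (our own sketch, not the platform's actual schema), each recording is split into segments and users attach typed tags to them:

from dataclasses import dataclass, field

@dataclass
class Segment:
    recording_id: str
    start_s: float
    end_s: float
    tags: list = field(default_factory=list)  # (user, tag_type, value) triples

def add_tag(segment, user, tag_type, value):
    segment.tags.append((user, tag_type, value))

# Hypothetical example: two users annotate the same aria
aria = Segment("recording-0042", start_s=0.0, end_s=312.5)
add_tag(aria, "alice", "emotional_reaction", "thrilling")
add_tag(aria, "bob", "technical_issue", "tape hiss")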
Background: The Bern University of the Arts has inherited a large collection of about 15'000 hours of bootleg live opera recordings. Most of these recordings are unique, and many individual recordings are rather long (up to 3-4 hours), hence the idea of segmenting the recordings so as to allow for the creation of semantic links between segments and enhance the possibilities of collectively exploring the collection.
Core Idea: Users engaging in “active” listening leave semantic traces behind that can be used as a resource to guide further exploration of the collection, both by themselves and by third parties. The approach can be used for an entire spectrum of users, ranging from occasional opera listeners, through opera amateurs, to interpretation researchers. The tool can be used as a collaborative tagging platform among research teams or within citizen science settings. By putting the focus on the listeners and their personal reaction to the audio segments, the perspective of analysis can be switched to the user, e.g. by creating typologies or clusterings of listening tastes or by using the approach for match-making in social settings.
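The match-making idea can be sketched in a few lines (again our own illustration, under the assumption that each user's ratings form a vector of 1-5 stars over the segment catalogue, with 0 meaning "not rated"):

import numpy as np

def taste_similarity(a, b):
    # Cosine similarity over the segments both users have rated
    rated_by_both = (a > 0) & (b > 0)
    if not rated_by_both.any():
        return 0.0
    x, y = a[rated_by_both], b[rated_by_both]
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

alice = np.array([5, 0, 3, 4])
bob = np.array([4, 2, 3, 5])
print(taste_similarity(alice, bob))  # close to 1.0 -> similar tastes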
Demo Video
Proof of Concept
Opera Forever (demo application)
A first proof of concept was developed at the Swiss Open Cultural Data Hackathon 2019 in Sion and contains the following features:
The user can browse through and listen to the recordings of different performances of the same opera.
The individual recordings are segmented into their different parts.
By using simple swiping gestures, the user can navigate between the individual segments of the same recording (swiping left or right) or between different recordings (swiping up or down) - the swiping is not yet implemented, but you can click on the respective arrows.
For each segment, the user can indicate to what extent they like that particular segment (1 to 5 stars). - not implemented yet
Based on this information, individual preference lists and collective hit-parades are generated. - not implemented yet
Also, it will be possible to cluster users according to their musical taste, which opens up the possibility to match users based on their taste or to build recommendation systems. - not implemented yet
Data
Metadata: Ehrenreich Collection Database
Audio Files: Digitized audio recordings from the Ehrenreich Collection (currently not available online; many of them presenting copyright issues)
Photographs of artists: Taken from a variety of websites; most of them presenting copyright issues.
Documentation
Google Doc with Notes
Team
Birk Weiberg (birk)
Dominik Sievi (dsievi)
Beat Estermann (beat_estermann)
Pia Viviani (pia)
Oleg Lavrovsky (loleg)
Kenny Floria (paulkc)
Contact: beat.estermann@bfh.ch
Tags: concept, dev, design, glam
TimeGazer
GLAM mix'n'hack 2019, updated 2019-09-22 11:34:00
Welcome to TimeGazer: A time-traveling photo booth enabling you to send greetings from historical postcards.
Based on the wonderful "Postcards from Valais (1890-1950)" dataset, consisting of nearly 4000 historic postcards of Valais, we created a prototype of a mixed-reality photo booth.
Choose a historic postcard as a background, and a person will be style-transferred virtually onto the postcard.
Photobomb a historical postcard
A photo booth for time traveling
send greetings from the poster
virtually enter the historical postcard
Mockup of the process.
Potentially, VR-tracked props could be used to add selectable objects virtually into the scene.
Technology
This project is roughly based on a project from last year, which resulted in an active research project at the Databases and Information Systems group of the University of Basel: VIRTUE.
Hence, we use a similar setup:
HTC Vive Pro VR-Headset
Unity
Style Transfer - Styling Images with Convolutional Neural Networks
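As an illustration of the style-transfer step listed above, here is a short sketch using the publicly available Magenta arbitrary-image-stylization model on TensorFlow Hub (one possible implementation, not necessarily the network used at the hackathon; the file names are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')

def load_image(path):
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]  # add a batch dimension

content = load_image('visitor_photo.jpg')    # placeholder file names
style = load_image('valais_postcard.jpg')
stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img('greeting_card.jpg', stylized[0])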
Results
Website (password: Valais)
Video
Instagram account with the pictures taken
Project
Blue screen
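The blue screen feeds a simple chroma-key compositing step; a minimal OpenCV sketch of the idea (threshold values and file names are illustrative, not our production settings):

import cv2
import numpy as np

frame = cv2.imread('booth_shot.jpg')      # visitor in front of the blue screen
postcard = cv2.imread('postcard.jpg')
postcard = cv2.resize(postcard, (frame.shape[1], frame.shape[0]))

# Mask everything that falls into the blue hue range
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
blue_mask = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))

# Where the mask is blue, show the postcard; elsewhere, keep the visitor
composite = np.where(blue_mask[..., None] > 0, postcard, frame)
cv2.imwrite('timegazer_card.jpg', composite)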
Printer box
Standard box generated on MakerCase.
Modified for the input of paper and the output of the postcard.
The SVG and DXF box project files.
Data
Quote from the data introduction page:
A collection of 3900 postcards from Valais. Some highlights are churches, cable cars, landscapes and traditional costumes.
Source: Musées cantonaux du Valais – Musée d’histoire
https://opendata.swiss/en/dataset/postcards-from-valais-1890-1950
Team
Dr. Ivan Giangreco
Dr. Johann Roduit
Lionel Walter
Loris Sauter
Luca Palli
Ralph Gasser
Tags: concept, glam, tourism
Human Name Creativity
GLAM mix'n'hack 2019, updated 2019-09-24 01:34:00
Following last year's project about dog names, the Dog Name Creativity Survey of New York City, the focus this year was on human names. Swiss Post provides datasets with the top 5 first names for each postal code. The goal was again to create a creativity index, but this year under the motto of user involvement: you can enter your own name, set the language your name comes from, and see where you land in the ranking. The datasets are not perfect for this task because they don't contain all names, only the top 5 per postal code, so users have a good chance of getting a "score buff" for uniqueness. Nevertheless, it is a fun project.
Unfortunately it wasn't finished by the end of the hackathon (there is no UI), but here's the last draft version of the code:
import pandas as pd

# Letter ranks by frequency in German: common letters score low, rare ones high
HaufeD_ = {"e":1,"n":2,"i":3,"r":4,"s":5,"-":5,"t":6,"a":7,"d":8,"h":9,"u":10,"l":11,"c":12,"g":13,"m":14,"o":15,"b":16,
           "w":17,"f":18,"k":19,"z":20,"v":21,"p":22,"ü":23,"ä":24,"ö":25,"j":26,"x":27,"y":28,"q":29}
# Letter ranks by frequency in French
HaufeF_ = {"e":1,"a":2,"s":3,"t":4,"i":5,"-":5,"r":6,"n":7,"u":8,"l":9,"o":10,"d":11,"m":12,"c":13,"p":14,"é":15,"v":16,
           "h":17,"g":18,"f":19,"b":20,"q":21,"j":22,"à":23,"x":24,"è":25,"ê":26,"z":27,"y":28,"k":29,"ô":29,"û":29,"w":29,
           "â":29,"î":29,"ü":29,"ù":29,"ë":29,"œ":29,"ç":29,"ï":29}
# HaufeI_ = ...  (Italian ranks were never compiled, so "i" is not offered below)
landics = {"d": HaufeD_, "f": HaufeF_}

def KreaWert(name_, lan):
    """Creativity score of a first name for the given language (d/f)."""
    dic = landics[lan]
    name_ = str(name_)
    wert_ = 0
    for letter in str.lower(name_):
        if letter in dic:
            wert_ += dic[letter]   # rare letters are worth more
        else:
            wert_ += 20            # flat score for characters not in the table
    if name_ in H_:
        # scale by rarity: frequent names are scored down, unique names up
        wert_ = wert_ * ((Hmax - H_[name_]) / (Hmax - 1) * 5 + 0.2)
    # penalty for names much shorter or longer than the average name length
    if len(name_) < (DNL - 2) or len(name_) > (DNL + 2):
        wert_ = wert_ / 10 * 8
    return round(wert_, 1)

df = pd.read_csv("vornamen_proplz.csv", sep=",")
df["vorname"] = df["vorname"].str.strip()

# average name length (used for the length penalty above)
insgeNamLan_ = 0
for name in df["vorname"]:
    insgeNamLan_ += len(str(name))
DNL = round(insgeNamLan_ / len(df["vorname"]))

# frequency of each name, weighted by the "anzahl" (count) column
H_ = {}
counter = 0
for name in df["vorname"]:
    if name in H_:
        H_[name] += df["anzahl"][counter]
    else:
        H_[name] = df["anzahl"][counter]
    counter += 1

sortH_ = sorted(H_.values())
Hmax = sortH_[-1]
Hmin = sortH_[0]

lan = input("Set the language of your name (d/f): ")
name_ = input("What is your first name? ")
print(KreaWert(name_, lan))
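Even without a UI, a few extra lines on top of the draft show the intended ranking. This is a hypothetical sketch that reuses KreaWert, H_ and the German letter table from the code above and assumes German scoring for all names:
# Hypothetical ranking on top of the draft above (German scoring assumed for
# all names): score every distinct name, then place a user-supplied name.
scores = {name: KreaWert(name, "d") for name in H_}
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

your_name = input("What is your first name? ")
your_score = KreaWert(your_name, "d")
rank = 1 + sum(1 for _, s in ranking if s > your_score)
print(f"{your_name}: score {your_score}, rank {rank} of {len(ranking)} dataset names")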
Data
First and last names per postal code:
https://opendata.swiss/de/dataset/vornamen-pro-plz
https://opendata.swiss/de/dataset/nachnamen-pro-plz
Team
dsievi
concept,
dev,
design,
data,
expert
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="human_name_creativity">Human Name Creativity</h2>
<div class="level2">
<p>
Following the last years project about dog names Dog Name Creativity Survey of New York City <a class="wikilink1" href="/wiki/project:dncsonyc" title="project:dncsonyc">Dog Name Creativity Survey of New York City</a>. The focus this year was on human names. The swiss post provides datasets with the top 5 names from each postal code. The goal was again to create a creativity index. But this year, under the motto of user involvement with the option to enter your own name, set the language your name is from and to see yourself in the ranking. The datasets are not perfect for this task, because they dont contain all the names, only the top 5 per postal code. So the user has a high chance to get a score-buff for uniqueness. Nevertheless it is a fun project.
</p>
<p>
Unfortunately it wasnt finished until the end of the Hackathon, no UI, but here's the last draft version of the code:
</p>
<pre class="code">import pandas as pd
HaufeD_ = {"e":1,"n":2,"i":3,"r":4,"s":5,"-":5,"t":6,"a":7,"d":8,"h":9,"u":10,"l":11,"c":12,"g":13,"m":14,"o":15,"b":16, \
"w":17,"f":18,"k":19,"z":20,"v":21,"p":22,"":23,"":24,"":25,"j":26,"x":27,"y":28,"q":29}
HaufeF_ = {"e":1,"a":2,"s":3,"t":4,"i":5,"-":5,"r":6,"n":7,"u":8,"l":9,"o":10,"d":11,"m":12,"c":13,"p":14,"":15,"v":16, \
"h":17,"g":18,"f":19,"b":20,"q":21,"j":22,"":23,"x":24,"":25,"":26,"z":27,"y":28,"k":29,"":29,"":29,"w":29 \
,"":29,"":29,"":29,"":29,"":29,"":29,"":29,"":29}
#HaufeI_ =
landics = {"d":HaufeD_,"f":HaufeF_}
def KreaWert(name_,lan):
dic = landics[lan]
name_ = str(name_)
wert_ = 0
for letter in str.lower(name_):
temp_ = 0
if letter in dic :
temp_ += dic[letter]
wert_ += temp_
else:
temp_ += 20
wert_ += temp_
try:
H_[name_]
wert_ = wert_* ((Hmax-H_[name_])/(Hmax-1)*5 + 0.2)
except KeyError as exception:
pass
if len(name_) < (DNL-2) or len(name_) > (DNL+2):
wert_ = wert_/10*8
return round(wert_,1)
df = pd.read_csv("vornamen_proplz.csv", sep = ",")
df["vorname"] = df["vorname"].str.strip()
insgeNamLan_ = 0
for name in df["vorname"]:
insgeNamLan_ += len(str(name))
#unkreativittsrange = weniger als 4 / mehr als 8
DNL = round(insgeNamLan_ / len(df["vorname"]))
#Hufigkeit der Namen = H_
H_ = {}
counter = 0
for name in df["vorname"]:
if name in H_:
H_[name] += df["anzahl"][counter]
counter += 1
else:
H_[name] = df["anzahl"][counter]
counter +=1
sortH_ = sorted(H_.values())
Hmax = sortH_[len(sortH_)-1]
Hmin = sortH_[0]
lan = input("Set the language of your name (d/i/f): ")
name_ = input("What is your first name? ")
print(KreaWert(name_,lan))</pre>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<p>
Vor- und Nachnamen pro Postleitzahl:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <a class="urlextern" href="https://opendata.swiss/de/dataset/vornamen-pro-plz" rel="nofollow" title="https://opendata.swiss/de/dataset/vornamen-pro-plz">https://opendata.swiss/de/dataset/vornamen-pro-plz</a></div>
</li>
<li class="level1"><div class="li"> <a class="urlextern" href="https://opendata.swiss/de/dataset/nachnamen-pro-plz" rel="nofollow" title="https://opendata.swiss/de/dataset/nachnamen-pro-plz">https://opendata.swiss/de/dataset/nachnamen-pro-plz</a></div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <a class="wikilink2" href="/wiki/user:dsievi" rel="nofollow" title="user:dsievi">dsievi</a></div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>
</span></div>
</div>
|
||
Open Cultural Data Hackathon
|
2018-10-28 12:59:00
|
Art on Paper Gallery
|
Art on Paper Gallery
We are developing a gallery app for browsing artworks on paper. For the prototype we use a sample dataset provided by the Collection Online of the Graphische Sammlung ETH Zurich. In our app the user finds digital images of the prints and drawings and gets metadata about the different techniques and other details. The app invites the user to browse from one artwork to the next, following different paths such as the same technique, the same artist, the same subject and so on.
Challenge
To use an online collection like this properly, the user needs prior knowledge. Many people just love art and are interested, but they are not experts.
User
It is precisely this group of people that we invite to explore our large collection on an interactive journey.
Goals
The Art on Paper Gallery App enables the user to jump from one artwork to another in an associative way. It offers suggestions grouped by different categories, such as the artist, technique, etc. (see the sketch after this list).
It allows social interaction, with the possibility to like, share and comment on an artwork.
Artworks can be arranged according to relevance, number of clicks, etc.
This in turn allows collections and museums to evaluate user interests and trends.
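A minimal sketch of how such associative suggestions could be computed; the records and field names (artist, technique, subject) are invented stand-ins for the real collection metadata:
# Toy records standing in for the Graphische Sammlung sample dataset;
# the field names here are invented for illustration.
artworks = [
    {"id": 1, "artist": "Rembrandt", "technique": "etching", "subject": "portrait"},
    {"id": 2, "artist": "Rembrandt", "technique": "drypoint", "subject": "landscape"},
    {"id": 3, "artist": "Dürer", "technique": "etching", "subject": "portrait"},
]

def suggestions(current, collection):
    # Group possible next artworks by the attribute they share with the current one.
    paths = {}
    for other in collection:
        if other["id"] == current["id"]:
            continue
        for attr in ("artist", "technique", "subject"):
            if other[attr] == current[attr]:
                paths.setdefault(attr, []).append(other["id"])
    return paths

print(suggestions(artworks[0], artworks))
# -> {'artist': [2], 'technique': [3], 'subject': [3]}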
Code
The code is available at the following link: https://github.com/DominikStefancik/Art-on-Paper-Gallery-App.
Example of a possible design (mock-up image: /wiki/_media/project:artonpapergallery_start.jpg)
Data
Graphische Sammlung ETH Zurich, Collection Online, sample dataset with focus on different techniques of printmaking and drawing
Team
Dominik Štefančik, Software Engineer
Graphische Sammlung ETH Zurich, Susanne Pollack, Ann-Kathrin Seyffer
concept,
dev,
design,
data,
expert,
glam
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="art_on_paper_gallery">Art on Paper Gallery</h2>
<div class="level2">
<p>
We develop a gallery app for browsing art works on paper. For the prototype we use a dataset sample delivered from the Collection Online of the Graphische Sammlung ETH Zurich. In our app the user can find the digital images of the prints and drawings, gets metadata information about the different techniques and other details. The app invites the user to browse from one art work to the other, following different paths such as the same technique, the same artist, the same subject and so on.
</p>
<p>
<strong>Challenge</strong>
</p>
<p>
To use a Collection Online properly the user needs previous knowledge. Many people just love art and are interested but no experts.
</p>
<p>
<strong>User</strong>
</p>
<p>
Especially this group of people is invited to explore our large collection in an interactive journey.
</p>
<p>
<strong>Goals</strong>
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> The Art on Paper Gallery App enables the user to jump from one artwork to another in an associative way. It offers suggestions following different categories, such as the artist, technique, etc. </div>
</li>
<li class="level1"><div class="li"> It allows social interaction with the possibility to like, share and comment an artwork</div>
</li>
<li class="level1"><div class="li"> Artworks can be arranged according to relevance, number of clicks etc.</div>
</li>
<li class="level1"><div class="li"> This again allows Collections or Museums to evaluate the user interests and trends</div>
</li>
</ul>
<p>
<strong>Code</strong>
</p>
<p>
The code is available at the following link: <a class="urlextern" href="https://github.com/DominikStefancik/Art-on-Paper-Gallery-App" rel="nofollow" title="https://github.com/DominikStefancik/Art-on-Paper-Gallery-App">https://github.com/DominikStefancik/Art-on-Paper-Gallery-App</a>.
</p>
<p>
<img alt="" class="mediacenter img-responsive" src="/wiki/_media/project:artonpapergallery.jpg"/>
</p>
<p>
<strong>Example of a possible Design</strong>
</p>
<p>
<img alt="" class="mediacenter img-responsive" src="/wiki/_media/project:artonpapergallery_start.jpg"/>
</p>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Graphische Sammlung ETH Zurich, Collection Online, sample dataset with focus on different techniques of printmaking and drawing</div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Dominik tefanik, Software Engineer</div>
</li>
<li class="level1"><div class="li"> Graphische Sammlung ETH Zurich, Susanne Pollack, Ann-Kathrin Seyffer</div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/tag:glam?do=showtag&tag=glam" rel="tag" title="tag:glam"><span class="iconify" data-icon="mdi:tag-text-outline"></span> glam</a>
</span></div>
</div>
|
||
Open Cultural Data Hackathon
|
2018-10-28 13:25:00
|
Artify
|
Artify
Explore the collection in a new, interesting way.
You have to find objects that have similar metadata and try to match them. The displayed objects are (semi-)randomly selected from a dataset (e.g. from the SNM). Based on the metadata of the starting object, the app searches for three other objects:
One that matches in 2+ metadata tags
One that matches in 1 metadata tag
One that is completely random.
If you choose the right one, the app displays three new objects selected in the same way (see the sketch below).
Tags used from the datasets:
OBJEKT Klassifikation (x)
OBJEKT Webtext
OBJEKT Datierung (x)
OBJEKT → Herstellung (x)
OBJEKT → Herkunft (x)
(x) = used for matching
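A minimal sketch of this selection rule in Python; the tag field names are shortened stand-ins for the SNM metadata listed above:
import random

# The four tags marked with (x) above are used for matching.
MATCH_TAGS = ("klassifikation", "datierung", "herstellung", "herkunft")

def shared_tags(a, b):
    # Number of matching tags on which two objects agree; empty fields don't count.
    return sum(1 for t in MATCH_TAGS if a.get(t) and a.get(t) == b.get(t))

def build_round(start, pool):
    # One game round: a strong match (2+ tags), a weak match (exactly 1 tag)
    # and a completely random object, shuffled so the player has to guess.
    others = [o for o in pool if o is not start]
    strong = [o for o in others if shared_tags(start, o) >= 2]
    weak = [o for o in others if shared_tags(start, o) == 1]
    if not strong or not weak:
        return None  # no candidates: the sparse-data problem noted under "To Do"
    round_ = [random.choice(strong), random.choice(weak), random.choice(others)]
    random.shuffle(round_)
    return round_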
To Do
The datasets are too diverse; in some cases there is no match. Datasets need to be prepared.
The tag "Klassifikation" is too specific.
The tags "Herstellung" and "Herkunft" are often empty or inconsistent.
The gaming aspect needs to be implemented
Use case
There are various cases where the app could be used; it mainly depends on the datasets you use:
Explore hidden objects of a museum collection
Train students to identify art periods
Find connections between museums that are not obvious (e.g. between art and historical objects)
Data
Democase:
SNM
https://opendata.swiss/en/organization/schweizerisches-nationalmuseum-snm
→ Built with two sets: "Technologie und Brauchtum" / "Kutschen & Schlitten & Fahrzeuge"
Links
Github: https://github.com/zack17/ocdh2018
Tech. Demo: https://zack17.github.io/ocdh2018/
Design Demo (not functional): https://tempestas.ch/artify/
Team
Micha Reiser
Jacqueline Martinelli
Anastasiya Korotkova
Dominic Studer
Yaw Lam
concept,
dev,
design,
data,
expert,
glam
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="artify">Artify</h2>
<div class="level2">
<p>
<img alt="" class="mediacenter img-responsive" src="/wiki/_media/project:titel3.jpg?w=250&tok=c81b3f" width="250"/>
</p>
<p>
Explore the collection in a new, interessting way
</p>
<p>
You have to find objects, which have similar metadata and try to match them. The displayed objects are (semi-)randomly selected from a dataset (eg. from SNM). From the metadata of the starting object, the app will search for three other objects:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> One which matches in 2+ metadata tags</div>
</li>
<li class="level1"><div class="li"> One which matches in 1 metadata tag</div>
</li>
<li class="level1"><div class="li"> One which is completly random.</div>
</li>
</ul>
<p>
If you choose the right one, the app will display three new objects accordingly to the way explained above.
</p>
<p>
Tags used from the datasets:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> OBJEKT Klassifikation (x)</div>
</li>
<li class="level1"><div class="li"> OBJEKT Webtext</div>
</li>
<li class="level1"><div class="li"> OBJEKT Datierung (x)</div>
</li>
<li class="level1"><div class="li"> OBJEKT Herstellung (x)</div>
</li>
<li class="level1"><div class="li"> OBJEKT Herkunft (x)</div>
</li>
</ul>
<p>
(x) = used for matching
</p>
</div>
<h4 id="to_do">To Do</h4>
<div class="level4">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Datasets are too divers; in some cases there is no match. Datasets need to be prepared.</div>
</li>
<li class="level1"><div class="li"> The tag Klassifikation is too specific</div>
</li>
<li class="level1"><div class="li"> The tags Herstellung and Herkunft are often empty or not consistent.</div>
</li>
</ul>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> The gaming aspect needs to be implemented</div>
</li>
<li class="level1"><div class="li"> </div>
</li>
</ul>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="use_case">Use case</h2>
<div class="level2">
<p>
There are various cases, where the app could be used. It mainly depends on the datasets you use:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Explore hidden objects of a museum collection</div>
</li>
<li class="level1"><div class="li"> Train students to identify art periods</div>
</li>
<li class="level1"><div class="li"> Find connections between museums, which are not obvious (e.g. art and historical objects)</div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<p>
Democase:
</p>
<p>
SNM
<a class="urlextern" href="https://opendata.swiss/en/organization/schweizerisches-nationalmuseum-snm" rel="nofollow" title="https://opendata.swiss/en/organization/schweizerisches-nationalmuseum-snm">https://opendata.swiss/en/organization/schweizerisches-nationalmuseum-snm</a>
</p>
<p>
> Build with two sets: Technologie und Brauchtum / Kutschen & Schlitten & Fahrzeuge
</p>
</div>
<h2 class="sectionedit4 page-header pb-3 mb-4 mt-5" id="links">Links</h2>
<div class="level2">
<p>
Github: <a class="urlextern" href="https://github.com/zack17/ocdh2018" rel="nofollow" title="https://github.com/zack17/ocdh2018">https://github.com/zack17/ocdh2018</a>
</p>
<p>
Tech. Demo: <a class="urlextern" href="https://zack17.github.io/ocdh2018/" rel="nofollow" title="https://zack17.github.io/ocdh2018/">https://zack17.github.io/ocdh2018/</a>
</p>
<p>
Design Demo (not functional): <a class="urlextern" href="https://tempestas.ch/artify/" rel="nofollow" title="https://tempestas.ch/artify/">https://tempestas.ch/artify/</a>
</p>
</div>
<h2 class="sectionedit5 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Micha Reiser</div>
</li>
<li class="level1"><div class="li"> Jacqueline Martinelli</div>
</li>
<li class="level1"><div class="li"> Anastasiya Korotkova</div>
</li>
<li class="level1"><div class="li"> Dominic Studer</div>
</li>
<li class="level1"><div class="li"> Yaw Lam</div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/tag:glam?do=showtag&tag=glam" rel="tag" title="tag:glam"><span class="iconify" data-icon="mdi:tag-text-outline"></span> glam</a>
</span></div>
</div>
|
||
Open Cultural Data Hackathon
|
2018-10-29 14:51:00
|
Ask the Artist
|
Ask the Artist
The project idea is to create a voice assistant with the identity of an artist. For our demo we chose the famous Swiss painter Ferdinand Hodler. That is to say, the voice assistant is neither Siri nor Alexa; instead, it is an avatar of Ferdinand Hodler who can answer your questions about his art and his life.
You interact with the program directly by talking, just as you would in daily life. You can ask it all kinds of questions about Ferdinand Hodler, e.g.:
When did you start painting?
Who taught you painting?
Can you show me some of your paintings?
Where can I find an exhibition with your artworks?
By letting people talk to a digital image of the artist directly, we aim to bring art closer to their daily lives in a direct, intuitive and hopefully interesting way.
Museum audiences need to keep quiet, which is not very friendly to children. Likewise, for people with special needs, such as the visually impaired, and for people without professional knowledge about art, it is not easy to enjoy a museum visit. A voice assistant can help remove those barriers and make art accessible to more people.
If you ask how our product differs from Amazon's Alexa or Apple's Siri, there are two major points:
The user interacts with the artist directly, by talking. With other applications, communication happens through Alexa or Siri, which act as a third-party channel delivering the message. In our case, users get a more immersive user experience and feel as if they were talking to an artist friend, not to an application.
The other difference is that the answers to the questions are preset. Alexa and Siri essentially search the user's question online and read out the returned results, so there is no guarantee that the answer is correct or suitable. In our case, all answers come from reliable datasets of museums and other research institutions and have been verified and proofread by art experts. Thus we can proudly say that our answers are reliable and correct. People can use it as a tool to educate children or as a visiting assistant in an exhibition. (A minimal sketch of this preset matching follows below.)
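To illustrate the preset approach, here is a minimal keyword-matching sketch. It is hypothetical: the questions and answers are invented stand-ins for the curated, expert-verified answer store, and in the real prototype speech recognition and synthesis would sit in front of and behind this step.
import re

# Toy version of the curated question-answer store. In the prototype the
# answers come from verified museum and SIKART data; these are invented.
FAQ = [
    ({"start", "begin", "first"}, "I started painting as a young man in Geneva."),
    ({"taught", "teacher", "learn"}, "I was taught by Barthélemy Menn in Geneva."),
    ({"exhibition", "where", "see"}, "You can see my works at the Kunsthaus Zürich, among others."),
]
FALLBACK = "I am afraid I did not understand that. Could you ask me differently?"

def answer(question):
    # Return the preset answer whose keyword set overlaps the question the most.
    words = set(re.findall(r"[a-zäöüéè]+", question.lower()))
    best, best_hits = FALLBACK, 0
    for keywords, text in FAQ:
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = text, hits
    return best

print(answer("Who taught you painting?"))  # -> "I was taught by Barthélemy Menn in Geneva."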
Video demo: https://www.youtube-nocookie.com/embed/DlABgOf0b8w
Data
Kunsthaus Zürich
⭐️ List of all Exhibitions at Kunsthaus Zürich
SIK-ISEA
⭐️ Artist data from the SIKART Lexicon on art in Switzerland
Swiss National Museum
⭐️ Representative sample from the Paintings & Sculptures Collection (images and metadata)
Wikimedia Switzerland
Team
Angelica
Barbara
Anlin (lianganlin@foxmail.com)
concept,
dev,
design,
data,
expert,
glam
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="ask_the_artist">Ask the Artist</h2>
<div class="level2">
<p>
The project idea is to create a voice assistance with the identity of an artist. In our case, we created a demo of the famous Swiss painter Ferdinand Hodler. That is to say, the voice assistance is nor Siri nor Alexa. Instead, it is an avatar of Ferdinand Hodler who can answer your questions about his art and his life.
</p>
<p>
You can directly interact with the program by talking, as what you would do normally in your daily life. You can ask it all kinds of questions about Ferdinand Hodler, e.g.:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> When did you start painting?</div>
</li>
<li class="level1"><div class="li"> Who taught you painting?</div>
</li>
<li class="level1"><div class="li"> Can you show me some of your paintings?</div>
</li>
<li class="level1"><div class="li"> Where can I find an exhibition with your artworks?</div>
</li>
</ul>
<p>
By talking to the digital image of the artist directly, we aim to bring the art closer to people's daily life, in a direct, intuitive and hopefully interesting way.
</p>
<p>
As you know, museum audiences need to keep quiet which is not so friendly to children. Also, for people with special needs, like the visually dispaired, and people without professional knowledge about art, it is not easy for them to enjoy the museum visit. To make art accessible to more people, a voice assistance can help with solving those barriers.
</p>
<p>
If you asked the difference between our product with Amazon's Alexa or Apple's Siri, there are two major points:
</p>
<ol class=" fix-media-list-overlap">
<li class="level1"><div class="li"> The user can interact with the artist in a direct way: talking to each other. In other applications, the communication happened by Alexa or Siri to deliver the message as the 3rd party channel. In our case, users can have immersive and better user experienceand they will feel like if they were talking to an artist friend, not an application.</div>
</li>
</ol>
<ol class=" fix-media-list-overlap">
<li class="level1"><div class="li"> The other difference is that the answers to the questions are preset. The essence of how Alexa or Siri works is that they search the question asked by users online and read the returned search results out. In that case, we cannot make sure that the answer is correct and/or suitable. However, in our case, all the answers are coming from reliable data sets of museum and other research institutions, and have been verified and proofread by the art experts. Thus, we can proudly say, the answers from us are reliable and correct. People can use it as a tool to educate children or as visiting assistance in the exhibition. </div>
</li>
</ol>
<p>
<a class="media" href="/wiki/_detail/project:ask_the_artist.jpg?id=project%3Aweb_exhibition" title="project:ask_the_artist.jpg"><img alt="" class="media img-responsive" src="/wiki/_media/project:ask_the_artist.jpg?w=200&tok=4a7044" width="200"/></a>
<a class="media" href="/wiki/_detail/project:121211111.jpg?id=project%3Aweb_exhibition" title="project:121211111.jpg"><img alt="" class="media img-responsive" src="/wiki/_media/project:121211111.jpg?w=200&tok=cf36f8" width="200"/></a>
<a class="media" href="/wiki/_detail/project:11111111.jpg?id=project%3Aweb_exhibition" title="project:11111111.jpg"><img alt="" class="media img-responsive" src="/wiki/_media/project:11111111.jpg?w=200&tok=0ebe45" width="200"/></a>
</p>
<p>
Video demo:
</p>
<iframe allowfullscreen="" class="vshare__none" frameborder="0" height="239" scrolling="no" src="//www.youtube-nocookie.com/embed/DlABgOf0b8w" width="425"></iframe>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> List and link your actual and ideal data sources.</div>
</li>
</ul>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <strong>Kunsthaus Zrich</strong></div>
</li>
</ul>
<p>
List of all Exhibitions at Kunsthaus Zrich
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <strong>SIK-ISEA</strong></div>
</li>
</ul>
<p>
Artist data from the SIKART Lexicon on art in Switzerland
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <strong>Swiss National Museum</strong></div>
</li>
</ul>
<p>
Representative sample from the Paintings & Sculptures Collection (images and metadata)
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <strong>Wikimedia Switzerland</strong></div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Angelica</div>
</li>
<li class="level1"><div class="li"> Barbara</div>
</li>
<li class="level1"><div class="li"> Anlin (lianganlin@foxmail.com)</div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/tag:glam?do=showtag&tag=glam" rel="tag" title="tag:glam"><span class="iconify" data-icon="mdi:tag-text-outline"></span> glam</a>
</span></div>
</div>
|
||
Open Cultural Data Hackathon
|
2018-10-28 15:42:00
|
Dog Name Creativity Survey of New York City
|
Dog Name Creativity Survey of New York City
We started this project to see whether art and cultural institutions in the environment have an impact on the creativity of dog names. This was not possible with the data from Zurich, because the name dataset does not contain information about the location, and the dataset about the owners does not include the dog names. We chose to stick with our idea but used a different dataset: the NYC Dog Licensing Dataset.
The creativity of a name is measured via the frequency of each letter in the English language, so rare letters score higher, and the score is then adjusted according to the number of dogs sharing the same name. For example, "Rex" scores r=9 + e=1 + x=24 = 34 before the rarity adjustment. The data on the cultural environment comes from Wikidata.
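How might the cultural environment be quantified? A possible sketch, not the team's recorded query: using the SPARQLWrapper package against the Wikidata endpoint, one can count museums (Q33506) located (P131) in each New York borough. Wikidata's coverage is uneven, so such counts are at best indicative.
from SPARQLWrapper import SPARQLWrapper, JSON

# Count museums per NYC borough: museum (Q33506) located in (P131) a borough
# that is itself located in New York City (Q60). One possible modelling;
# items missing P131 links will not be counted.
QUERY = """
SELECT ?boroughLabel (COUNT(DISTINCT ?museum) AS ?museums) WHERE {
  ?museum wdt:P31/wdt:P279* wd:Q33506 ;
          wdt:P131 ?borough .
  ?borough wdt:P131 wd:Q60 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?boroughLabel
ORDER BY DESC(?museums)
"""

sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["boroughLabel"]["value"], row["museums"]["value"])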
After some data-cleaning with OpenRefine and failed attempts with OpenCalc we got the following code:
import string
import pandas as pd

# Letter ranks by frequency in English: common letters score low, rare ones high
numbers_ = {"e":1,"t":2,"a":3,"o":4,"n":5,"i":6,"s":7,"h":8,"r":9,"l":10,"d":11,"u":12,"c":13,
            "m":14,"w":15,"y":16,"f":17,"g":18,"p":19,"b":20,"v":21,"k":22,"j":23,"x":24,"q":25,"z":26}

def KreaWert(name_):
    """Creativity score of a dog name: rare letters and rare names score higher."""
    name_ = str(name_)
    wert_ = 0
    for letter in str.lower(name_):
        if letter in string.ascii_lowercase:
            wert_ += numbers_[letter]
    if name_ in H_:
        # scale by rarity: popular names are scored down, unique names up
        wert_ = wert_ * ((Hmax - H_[name_]) / (Hmax - 1) * 5 + 0.2)
    return round(wert_, 1)

df = pd.read_csv("Vds3.csv", sep=";")
df["AnimalName"] = df["AnimalName"].str.strip()

H_ = df["AnimalName"].value_counts()   # frequency of each name
Hmax = max(H_)
Hmin = min(H_)

df["KreaWert"] = df["AnimalName"].map(KreaWert)
df.to_csv("namen2.csv")

# one row per distinct name with its score, joined with its frequency
dftemp = df[["AnimalName", "KreaWert"]].drop_duplicates().set_index("AnimalName")
dftemp.to_csv("dftemp.csv")
df3 = pd.DataFrame()
df3["amount"] = H_
df3 = df3.join(dftemp, how="outer")
df3.to_csv("data3.csv")

# average scores per borough, and per borough and gender
df1 = round(df.groupby("Borough").mean(), 2)
df1.to_csv("data1.csv")
df2 = round(df.groupby(["Borough", "AnimalGender"]).mean(), 2)
df2.to_csv("data2.csv")
Visualisations were made with D3: https://d3js.org/
Data
Dog data of the City of Zurich:
https://opendata.swiss/de/dataset/hundenamen-aus-dem-hundebestand-der-stadt-zurich
https://opendata.swiss/de/dataset/hundebestand-der-stadt-zurich
NYC Dog Licensing Dataset:
https://data.cityofnewyork.us/Health/NYC-Dog-Licensing-Dataset/nu7n-tubp
Team
Birk Weiberg
Dominik Sievi
concept,
dev,
design,
data,
expert,
glam
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="dog_name_creativity_survey_of_new_york_city">Dog Name Creativity Survey of New York City</h2>
<div class="level2">
<p>
<a class="media" href="/wiki/_detail/project:dncrsesult1.png?id=project%3Adncsonyc" title="project:dncrsesult1.png"><img alt="How does the creativity of given dog names related to the amount of culture found in the different boroughs of New York City?" class="media img-responsive" src="/wiki/_media/project:dncrsesult1.png?w=600&tok=f2a1ba" title="How does the creativity of given dog names related to the amount of culture found in the different boroughs of New York City?" width="600"/></a>
</p>
<p>
We started this project to see if art and cultural institutions in the environment have an impact on the creativity of dognames. This was not possible with the date from Zurich because the name-dataset does not contain information about the location and the dataset about the owners does not include the dognames. We choose to stick with our idea but used a different dataset: NYC Dog Licensing Dataset.
</p>
<p>
The creativity of a name is measured by the frequency of each letter in the English language and gets +/- points according to the amount of dogs with the same name. The data for the cultural environment comes from Wikidata.
</p>
<p>
After some data-cleaning with OpenRefine and failed attempts with OpenCalc we got the following code:
</p>
<pre class="code">import string
import pandas as pd
numbers_ = {"e":1,"t":2,"a":3,"o":4,"n":5,"i":6,"s":7,"h":8,"r":9,"l":10,"d":11,"u":12,"c":13,"m":14,"w":15,"y":16,"f":17,"g":18,"p":19,"b":20,"v":21,"k":22,"j":23,"x":24,"q":25,"z":26}
name_list = []
def KreaWert(name_):
name_ = str(name_)
wert_ = 0
for letter in str.lower(name_):
temp_ = 0
if letter in string.ascii_lowercase :
temp_ += numbers_[letter]
wert_ += temp_
if name_ in H_:
wert_ = wert_* ((Hmax-H_[name_])/(Hmax-1)*5 + 0.2)
return round(wert_,1)
df = pd.read_csv("Vds3.csv", sep = ";")
df["AnimalName"] = df["AnimalName"].str.strip()
H_ = df["AnimalName"].value_counts()
Hmax = max(H_)
Hmin = min(H_)
df["KreaWert"] = df["AnimalName"].map(KreaWert)
df.to_csv("namen2.csv")
dftemp = df[["AnimalName", "KreaWert"]].drop_duplicates().set_index("AnimalName")
dftemp.to_csv("dftemp.csv")
df3 = pd.DataFrame()
df3["amount"] = H_
df3 = df3.join(dftemp, how="outer")
df3.to_csv("data3.csv")
df1 = round(df.groupby("Borough").mean(),2)
df1.to_csv("data1.csv")
df2 = round(df.groupby(["Borough","AnimalGender"]).mean(),2)
df2.to_csv("data2.csv")</pre>
<p>
Visualisations were made with D3: <a class="urlextern" href="https://d3js.org/" rel="nofollow" title="https://d3js.org/">https://d3js.org/</a>
</p>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<p>
Hundedaten der Stadt Zrich:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <a class="urlextern" href="https://opendata.swiss/de/dataset/hundenamen-aus-dem-hundebestand-der-stadt-zurich" rel="nofollow" title="https://opendata.swiss/de/dataset/hundenamen-aus-dem-hundebestand-der-stadt-zurich">https://opendata.swiss/de/dataset/hundenamen-aus-dem-hundebestand-der-stadt-zurich</a></div>
</li>
<li class="level1"><div class="li"> <a class="urlextern" href="https://opendata.swiss/de/dataset/hundebestand-der-stadt-zurich" rel="nofollow" title="https://opendata.swiss/de/dataset/hundebestand-der-stadt-zurich">https://opendata.swiss/de/dataset/hundebestand-der-stadt-zurich</a></div>
</li>
</ul>
<p>
NYC Dog Licensing Dataset:
</p>
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <a class="urlextern" href="https://data.cityofnewyork.us/Health/NYC-Dog-Licensing-Dataset/nu7n-tubp" rel="nofollow" title="https://data.cityofnewyork.us/Health/NYC-Dog-Licensing-Dataset/nu7n-tubp">https://data.cityofnewyork.us/Health/NYC-Dog-Licensing-Dataset/nu7n-tubp</a></div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Birk Weiberg</div>
</li>
<li class="level1"><div class="li"> Dominik Sievi</div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/tag:glam?do=showtag&tag=glam" rel="tag" title="tag:glam"><span class="iconify" data-icon="mdi:tag-text-outline"></span> glam</a>
</span></div>
</div>
|
||
Open Cultural Data Hackathon
|
2018-10-27 20:36:00
|
Find Me an Exhibit
|
Find Me an Exhibit
Are you ready to take up the challenge? Film objects from different categories in the exhibition "History of Switzerland" while racing against the clock.
The app displays one of several categories of exhibits that can be found in the exhibition (like "clothes", "paintings" or "clocks"). Your job is to find a matching exhibit as quickly as possible. You don't have much time, so hurry up!
Best played on portable devices.
The frontend of the app is based on the game "Emoji Scavenger Hunt" (https://github.com/google/emoji-scavenger-hunt); the model is built with TensorFlow.js (https://js.tensorflow.org/) and fed with a lot of images (https://opendata.swiss/en/dataset?q=&organization=schweizerisches-nationalmuseum-snm) kindly provided by the National Museum Zurich (https://www.nationalmuseum.ch/e/). The app is in a pre-alpha stage.
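The actual model runs as TensorFlow.js in the browser, but the training step behind such a game can be illustrated in Python with Keras transfer learning. This is a hypothetical sketch, not the team's code: the folder snm_images/ (one subfolder per category such as clothes, paintings, clocks) and all parameters are assumptions.
import tensorflow as tf

# Load images from one subfolder per exhibit category (assumed layout).
train = tf.keras.utils.image_dataset_from_directory(
    "snm_images/", image_size=(224, 224), batch_size=32)
num_classes = len(train.class_names)

# Transfer learning: reuse pretrained MobileNetV2 features, train only the head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)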
Data
Code: https://github.com/dataramblers/glamhack18
Demo: https://game.annotat.net
Team
Some data ramblers
concept,
dev,
design,
data,
expert
|
<h2 class="sectionedit1 page-header pb-3 mb-4 mt-5" id="find_me_an_exhibit">Find Me an Exhibit</h2>
<div class="level2">
<p>
Are you ready to take up the challenge? Film categories of objects in the exhibition History of Switzerland running against the clock.
</p>
<p>
The app displays one of several categories of exhibits that can be found in the exhibition (like cloths, paintings or clocks). Your job is to find a matching exhibit as quick as possible. You don't have much time, so hurry up!
</p>
<p>
Best played on portable devices. <img alt=";-)" class="icon" src="/wiki/lib/images/smileys/icon_wink.gif"/>
</p>
<p>
The frontend of the app is based on the game <a class="urlextern" href="https://github.com/google/emoji-scavenger-hunt" rel="nofollow" title="https://github.com/google/emoji-scavenger-hunt">Emoji Scavenger Hunt</a>, the model is built with <a class="urlextern" href="https://js.tensorflow.org/" rel="nofollow" title="https://js.tensorflow.org/">TensorFlow.js</a> fed with a <a class="urlextern" href="https://opendata.swiss/en/dataset?q=&organization=schweizerisches-nationalmuseum-snm" rel="nofollow" title="https://opendata.swiss/en/dataset?q=&organization=schweizerisches-nationalmuseum-snm">lot of images</a> kindly provided by the <a class="urlextern" href="https://www.nationalmuseum.ch/e/" rel="nofollow" title="https://www.nationalmuseum.ch/e/">National Museum Zurich</a>. The app is in pre-alpha stage.
</p>
</div>
<h2 class="sectionedit2 page-header pb-3 mb-4 mt-5" id="data">Data</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> <a class="urlextern" href="https://github.com/dataramblers/glamhack18" rel="nofollow" title="https://github.com/dataramblers/glamhack18">Code</a></div>
</li>
<li class="level1"><div class="li"> <a class="urlextern" href="https://game.annotat.net" rel="nofollow" title="https://game.annotat.net">Demo</a></div>
</li>
</ul>
</div>
<h2 class="sectionedit3 page-header pb-3 mb-4 mt-5" id="team">Team</h2>
<div class="level2">
<ul class=" fix-media-list-overlap">
<li class="level1"><div class="li"> Some data ramblers</div>
</li>
</ul>
<div class="tags"><span>
<a class="wikilink1 tag label label-default mx-1" href="/wiki/status:concept?do=showtag&tag=status%3Aconcept" rel="tag" title="status:concept"><span class="iconify" data-icon="mdi:tag-text-outline"></span> concept</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:dev?do=showtag&tag=needs%3Adev" rel="tag" title="needs:dev"><span class="iconify" data-icon="mdi:tag-text-outline"></span> dev</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:design?do=showtag&tag=needs%3Adesign" rel="tag" title="needs:design"><span class="iconify" data-icon="mdi:tag-text-outline"></span> design</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:data?do=showtag&tag=needs%3Adata" rel="tag" title="needs:data"><span class="iconify" data-icon="mdi:tag-text-outline"></span> data</a>,
<a class="wikilink1 tag label label-default mx-1" href="/wiki/needs:expert?do=showtag&tag=needs%3Aexpert" rel="tag" title="needs:expert"><span class="iconify" data-icon="mdi:tag-text-outline"></span> expert</a>
</span></div>
</div>
|