Visualizing an image network without rendering files: the development of a methodological framework combining user hashtags with computer vision labels
Tucci, Giulia
This document is an article, created in 2022
This article presents a method for visualizing networks of geolocated images without rendering the image files in the network. The method is the result of an intensive "data sprint" held during the University of Amsterdam Digital Methods Initiative Summer School 2021. During the data sprint, I developed a methodological framework for generating a network of geolocated Twitter images by combining the hashtags tweeted with the images and the Google Cloud Vision API's best single expression describing each image (bestGuessLabel). Given the limitations of working with massive amounts of image data and the computational memory required to generate network visualizations, the possibility of using description tags to create image networks is promising. The images analyzed in this study were extracted from Twitter by filtering for the hashtags #deepfakes and #deepfake and were tagged with a country-code location. The hashtags that Twitter users included in their tweets thus provide the context and the users' own description of each image. This information was combined in a bipartite network with a computer vision entity, the computer vision description of the image, to generate a networked description of the whole image set. I suggest that this method can be considered for exploratory research with large sets of images.
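A minimal sketch of the pipeline the abstract describes, not the author's published code: each image is annotated with the Google Cloud Vision API's bestGuessLabel (via web detection), and that label is then linked to the tweet's user hashtags in a bipartite graph. The `tweets` records, file paths, and output file name are hypothetical placeholders; the Vision client assumes credentials are configured in the environment.

```python
from google.cloud import vision
import networkx as nx

client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

def best_guess_label(image_path):
    """Return the Vision API's single best textual guess for an image, or None."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.web_detection(image=image)
    labels = response.web_detection.best_guess_labels
    return labels[0].label if labels else None

# Hypothetical input: one record per geolocated image collected from the
# #deepfake / #deepfakes query, with the hashtags of the tweet it came from.
tweets = [
    {"image": "img/001.jpg", "hashtags": ["deepfakes", "ai", "disinformation"]},
    {"image": "img/002.jpg", "hashtags": ["deepfake", "politics"]},
]

G = nx.Graph()
for t in tweets:
    label = best_guess_label(t["image"])
    if label is None:
        continue
    # Bipartite convention: 0 = computer-vision labels, 1 = user hashtags.
    G.add_node(label, bipartite=0)
    for tag in t["hashtags"]:
        G.add_node("#" + tag, bipartite=1)
        G.add_edge(label, "#" + tag)

# Export for a network-visualization tool such as Gephi.
nx.write_gexf(G, "hashtag_label_network.gexf")
```

Because only the text labels and hashtags enter the graph, the network can be laid out and explored without loading any image files, which is what keeps the memory footprint small for large image sets.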