this is my home, viewed from a satellite as i am writing this. to view yourself from a satellite right now, click the image and choose your city (or the city nearest you).
what would a map look like today? it must be multi-dimensional. it must include width (x), height (y), depth (z), time (t), and information (i), plus the ability to view these from any point on the map, i.e. to choose a perspective. with the advent of ‘real-time’ technology, which allows almost simultaneous feedback, we must also have the ability to view multiple perspectives at the same time, which may include a time feedback sequence, a perspective difference, of only nanoseconds.
first of all, we need the traditional geographic maps. but it is not as simple as that. the vantage point can change everything. we can view the middle east region, but how do we view it? we could consider a photograph a map of reflected light at a certain place and time. in other words, a photograph can be considered a geographic record, a map. traditionally, we might think of a map as a representation of certain features or paths as viewed from a third perspective. but what we have to question is: what is the third perspective? that is, we can view the middle east as we believe it to be right now. but who is ‘we’? and what is ‘right now’? in a truly interactive map we could adjust the ‘we’ factor as well as the time factor. that is, we could view the middle east of 200 b.c. as it was known by a particular person living in what is now germany in 1893. and how do we view it? from what point in space? perhaps it is only a conceptual space, as with an automobile map. that is, a map which merges geographical and informational data.
a map which includes all of our knowledge and perspectives is impossible because of the sheer number of possible perspectives. in other words, we can only view from a relatively few perspectives simultaneously, compared with the infinite possibilities of perspectives. we would be better off abandoning the traditional idea of a map and focusing on a map interface. a map interface would allow you to choose the coordinates from which you would like your perspective, and also choose which layers of information you would like to view.
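as a rough sketch of what a map interface might look like in code (all names here are hypothetical, not an existing system): you declare a perspective, then pick which layers to view from it.

```python
from dataclasses import dataclass, field

@dataclass
class Perspective:
    """a viewpoint: where, when, and at what scale the map is seen from."""
    x: float
    y: float
    z: float
    t: str                    # 'live' or a point in the past, e.g. '200 BC'
    scale: float = 1.0        # microscopic to macroscopic
    orientation: float = 0.0  # direction of view, in degrees

@dataclass
class MapInterface:
    """hypothetical interface: pick a perspective, then pick layers."""
    layers: dict = field(default_factory=dict)  # layer name -> layer data

    def add_layer(self, name, data):
        self.layers[name] = data

    def view(self, perspective, layer_names):
        """return only the layers the viewer chose, tagged with the viewpoint."""
        chosen = {n: self.layers[n] for n in layer_names if n in self.layers}
        return {"perspective": perspective, "layers": chosen}

m = MapInterface()
m.add_layer("terrain", "elevation contours")
m.add_layer("paths", "internet traceroutes")
m.add_layer("history", "cultural history as known in 1893")

view = m.view(Perspective(x=35.0, y=33.0, z=0.0, t="200 BC"), ["terrain", "history"])
print(sorted(view["layers"]))  # ['history', 'terrain']
```

the point of the design is that the map itself never has a single fixed rendering: the same layers can be asked for again from a different perspective.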
for a couple of ideas that start down this path, check out these links:
electronic cultural atlas institute
an atlas of cyberspaces
Internet Cartographer, by Inventix Software, is an application that works alongside your Web browser to classify and map all pages you visit. The map can be interrogated to find Web sites. (This and more images found at an atlas of cyberspaces)
a sophisticated objectification interface
|coordinates (perspective)|information|interface (way of presenting information)|
|---|---|---|
|geographic (x,y,z)|light rays|photograph (aerial, satellite, ultra-violet, first person, depending on perspective picked)|
|time (‘live’ or particular point in past)|paths|conceptual|
|person (limit of information by consciousness)|surface features|elevation contours|
|scale (microscopic to macroscopic)|trace|artistic|
|orientation (direction of view)|cultural history|traceroute|
|waves (frequency, cycle, type)| | |
|reactions (physical, chemical, etc.)| | |
there are large implications for information gathering and information preservation in the future. all creations, all information, must have identifying coordinates tied to them. when we take a photograph, our position must be indicated. with the use of GPS we could automatically save a coordinate stamp to all photographs. when we take a photograph, it will automatically record the time, date, geographic position (x,y,z), scale, orientation, method of photography, velocity, and photographer information. the image is directly uplinked to your life’s database. it is also immediately linked and categorized within the international historical database, so that someone 40 years from now, researching that particular place on that particular day, can see the hundreds or thousands of images taken by various people all over the world. from this the information can be assimilated and interpolated into a total re-creation of that day. and thus, for a fee, you can re-live that perfect day when you….
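the coordinate stamp described above could be sketched as a simple record attached to every photograph at capture time (a hypothetical structure, not any camera’s actual metadata format):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PhotoStamp:
    """hypothetical coordinate stamp saved with every photograph."""
    time: str           # ISO timestamp of capture
    position: tuple     # (x, y, z) from GPS
    scale: float        # e.g. derived from focal length
    orientation: float  # direction of view, in degrees
    method: str         # 'optical', 'infrared', 'ultra-violet', ...
    velocity: tuple     # photographer's velocity (vx, vy, vz)
    photographer: str

def stamp_photo(position, orientation, photographer,
                method="optical", scale=1.0, velocity=(0.0, 0.0, 0.0)):
    """attach identifying coordinates to a newly taken photograph."""
    return PhotoStamp(
        time=datetime.now(timezone.utc).isoformat(),
        position=position,
        scale=scale,
        orientation=orientation,
        method=method,
        velocity=velocity,
        photographer=photographer,
    )

s = stamp_photo(position=(41.5, -81.7, 200.0), orientation=270.0,
                photographer="anonymous")
print(s.position)  # (41.5, -81.7, 200.0)
```

a historical database could then index every image by any field of the stamp: all photographs of one place, one day, one direction of view.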
of course, information input cannot be limited to photography alone. a complete ‘video’ of your life can be made as you go along, recording all the information a processor can handle and store: photometric, gravitational, electromagnetic, sound, molecular, etc. we can record more than we are aware of. we could spend our whole life examining one particular second, examining every piece of data in relation to every other piece of data, creating an infinite number of relationships, viewed from an infinite number of perspectives. we now have the ability to link lives and times and places. to categorize and preserve an infinite history. billions of lives, of perspectives, of information, all linked together to show relationships. we can now examine the difference an event creates for each person who attends. but who has time to examine all of these millions of life perspectives? i mean, who watches the videos they make of the boat tour down the chicago river? how do you skip to the good parts? give me the highlights. as a person constantly records every second of their life, they can bookmark certain sections when they feel excited (much the same as photography, except that in photography you must anticipate the excitement). that is one way. but a better way is to record the human’s emotional indicators – such as a “Galvanic Skin Response (GSR) Sensor, a Blood Volume Pulse (BVP) sensor, a Respiration sensor and an Electromyogram (EMG)” (MIT affective computing). then we have the computer compile the highlights. we make a program which will search for various factors we wish to enjoy. we could say, for example, that we would like a relaxing day in north-east ohio in june of 2002. the computer could search the database for the most relaxing such day by quickly examining the various factors of human response and learned associations (like sunny day – relaxing, rainy day – not relaxing unless…).
these would be overlaid with your personal preference data to find the day most relaxing to you. the comparison is made for you individually. maybe you prefer a slight breeze, 74 degrees, a dog, a jeep, a girl, and you prefer to view it from the perspective of an 18-year-old who is most like yourself. and voila, your own personalized ‘real’ movie is fed into your reality manufacturer. a total objectification which leads to the complete death of the viewer.
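the highlight-compiling step could work something like this minimal sketch: scan an arousal signal (a GSR trace, say, one reading per second) and bookmark the spans where it spikes above baseline. the threshold rule here (mean plus one standard deviation) is an arbitrary assumption, not anything from the MIT work.

```python
def find_highlights(gsr, threshold=None, min_gap=2):
    """bookmark the seconds where arousal (e.g. GSR) spikes above baseline.

    gsr: list of sensor readings, one per second.
    threshold: readings above this count as 'excited'; defaults to
               mean + one standard deviation (an arbitrary choice).
    min_gap: merge highlights closer together than this many seconds.
    """
    n = len(gsr)
    if threshold is None:
        mean = sum(gsr) / n
        var = sum((v - mean) ** 2 for v in gsr) / n
        threshold = mean + var ** 0.5
    marks = [i for i, v in enumerate(gsr) if v > threshold]
    # merge nearby marks into (start, end) highlight spans
    spans = []
    for i in marks:
        if spans and i - spans[-1][1] <= min_gap:
            spans[-1] = (spans[-1][0], i)
        else:
            spans.append((i, i))
    return spans

signal = [1, 1, 1, 5, 6, 1, 1, 1, 1, 7, 1]
print(find_highlights(signal))  # [(3, 4), (9, 9)]
```

the returned spans are the moments worth re-living; everything between them is the boat tour footage nobody watches.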
in fact, the distinction between the coordinates and information categories is only based on our traditional understanding of the limits of our self. while it is true that traditionally we have experienced information within the limits of a first person, as one point at one place in time at a certain scale, there is no real separation between the information of the coordinates and the information found in the second category. as viewers of information, we can vastly expand the possible realities we experience. we can plot the change of velocity. we can visualize acceleration by plotting time and space: movement over that space in time, and change in movement over that space in time.
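the velocity-and-acceleration idea is just repeated differencing: velocity is the change in position over time, acceleration the change in that change. a minimal sketch with evenly spaced position samples:

```python
def derivative(samples, dt=1.0):
    """finite-difference derivative of evenly spaced samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# position along one axis, sampled once per second (here x = t^2)
position = [0.0, 1.0, 4.0, 9.0, 16.0]
velocity = derivative(position)      # change in position over time
acceleration = derivative(velocity)  # change in that change

print(velocity)      # [1.0, 3.0, 5.0, 7.0]
print(acceleration)  # [2.0, 2.0, 2.0]
```

the same differencing could be applied to any recorded coordinate – scale, orientation, even the ‘we’ factor – which is the sense in which coordinates and information are not really separate categories.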
any way of displaying information could be included in this interface. by mixing up traditional ways of displaying information we would discover new ways to see information. we would discover relationships we never knew existed. we could view a surface terrain at a certain time and place through a text interface while overlaying the elevation contours of people’s bodies alive there during that time period. over the top of that we could overlay a tracing of the internet paths of the use of the word ‘scissors.’ we can program the computer to incessantly search for interesting relations and show them to us in new ways. we can abandon the first person. we can create our own information terrains which include our self. our traditional perspective is only one of an infinite number of choices – of position, of scale, of time, of orientation, of interfaces, of limits.
These images show WebPath, “… a tool that unobtrusively visualises a user’s trail as they browse the Web”, developed by Emmanuel Frécon, a researcher in the Distributed Collaborative Environments group at SICS. See this paper for more information: Frécon E. & Smith G., (1998), “WebPath – A three-dimensional Web History”, 1998 IEEE Symposium on Information Visualization (InfoVis ’98), Chapel Hill, NC, USA. (This and more images found at an atlas of cyberspaces)