Monday, May 16, 2016

From Agar.io to GGODD (global graphical online direct democracy)

Given our current levels of processing power and connectivity, there seem to be very juicy possibilities for new Internet interfaces that morph our online experiences into something palpably deeper and palpably faster.

Our computer screens have the ability to display (roughly speaking) anything, in any color. As yet, much of the meaning that we convey to each other through our screens retains the form, inherited from printing presses, of lines of black characters on a white background. It was totally natural/reasonable/understandable that this dichromatic way of using computer screens became so popular. It enabled us to apply the processing power of computers, and then the connective power of the Internet, to well-established textual methods of communication, scholarship, research, etc. Text can now be manipulated and searched with vastly greater efficiency.

We now spend a good portion of our lives, and manage a good portion of our affairs, online. Ventures with a serious potential to make our online experience significantly more vibrant, more meaningful, with more relevant feedback, will probably be of great interest.

Dichromatic text, as important as it has been historically (for example, as the basis of computer programming), is, of course, just one of many current and potential future media that can convey meaning. The ways in which it became so prevalent are, again, totally natural/reasonable/understandable, but now we appear to be on the brink of creating spectacularly new communication methods that will supersede our old friend, text.

We can imagine a graphical environment comprehensive and responsive enough that we're able to find or create an image to clearly represent any notion that we'd like to communicate in less time than it would have taken to type out a textual expression of the notion. We could use a fancy label like Graphical Supersession Point (GSP) for the hypothetical future point in time when this is accomplished. 

We could facilitate progress toward this sort of thing by looking less to the pages of printing presses and more to video games for inspiration in the basic design of our Internet portals/interfaces. This seems like a natural next step, a way of taking fuller advantage of the dynamic graphical capabilities of our Internet-connected screens.

Let's then imagine what we could add to our game-like interfaces to bring them closer to the GSP. 

There are multiplayer online games much more graphically sophisticated than Agar.io, but Agar.io's relative simplicity makes it easy to use as a metaphor or a starting point in imagining future interfaces. The existence of these games shows that many of the technical foundations are already in place for general-purpose Internet interfaces in which we interact/communicate via our movements through a graphical space: where we go, what we consume, what we eject. In Agar.io, we try to consume each other, and we eject bits of our own bodies for various strategic purposes in the struggle to eat and not be eaten. This interactive, graphical experience of being a nearly featureless blob, flying/floating/swimming through a barren space populated by other such blobs, prefigures a time when we'll be able to swim through, and cooperatively interact with, the entirety of our accumulated, digitized information stores.

Navigation & automatic space population

In Agar.io, we use our pointer to move around in the two dimensions of the rectangular playing area. As we eat and our character grows larger, our view automatically zooms out. When we lose mass, it zooms in. A key feature of our future interfaces will surely be the ability to change scale, to zoom in and out at will, in addition to moving around in the virtual space. And as we zoom out and more space appears around what we were looking at, or as we zoom in and more space appears within or between what we were looking at, our interface will decide, based on our explicit wishes and on other contextual cues, what to display in those spaces. It will also be able to make such decisions when we're moving right, left, etc., and even when we're not doing anything. This automatic presentation of relevant, related information will make our Internet experiences visually richer, more beautiful, fluid, and continuous. Instead of switching between discrete pages, these future interfaces could let us fly/float/swim to any information anywhere.
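To make this slightly more concrete, here's a minimal sketch, in TypeScript, of how zoom-dependent population of the space might work. Everything in it is an assumption for illustration: the Item shape, the detailScale field, and the two-octave relevance window are invented here, not part of any existing system.

```typescript
// Hypothetical sketch: decide what to show for a given viewport. Items are
// tagged with a position and the zoom level at which they become relevant.

interface Item {
  x: number;           // position in the virtual plane
  y: number;
  detailScale: number; // zoom level at which this item is most relevant
}

interface Viewport {
  cx: number;   // center of the view
  cy: number;
  zoom: number; // larger = more magnified (like shrinking in Agar.io)
}

// Keep items inside the visible rectangle whose natural scale is close to
// the current zoom, so zooming out surfaces coarse contextual objects and
// zooming in surfaces fine-grained ones.
function populate(items: Item[], view: Viewport, screenW: number, screenH: number): Item[] {
  const halfW = screenW / (2 * view.zoom);
  const halfH = screenH / (2 * view.zoom);
  return items.filter((it) => {
    const visible =
      Math.abs(it.x - view.cx) <= halfW &&
      Math.abs(it.y - view.cy) <= halfH;
    // show items within roughly two octaves of the current zoom level
    const scaleMatch = Math.abs(Math.log2(it.detailScale / view.zoom)) <= 2;
    return visible && scaleMatch;
  });
}
```

A real interface would presumably replace the simple filter with contextual ranking, but the zoom-level-as-relevance idea is the core of it.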

Splitting the screen

Another key to visualizing these forthcoming interfaces seems to be that we'll surely make use of multiple simultaneous windows, or sub-interfaces, into cyberspace. If there's a 'rabbit hole' that we want to explore (an object into which we want to zoom or dive) within a given window, we may also want to keep that window's current contents readily available while we're going down the rabbit hole: we might expect to come back to the current location soon, or want it visible as we traverse the rabbit hole, or want to transfer something between the current location and our destination. So we would spawn a new sub-interface from the existing one, or copy part of the existing one and paste it into a different one.
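One way to picture the mechanics: each sub-interface could be a node in a tree, so diving down a rabbit hole spawns a child view while the parent stays alive on screen. This is only a sketch under that assumption; the class and method names are hypothetical, and the Viewport shape is reused from the earlier sketch.

```typescript
// Hypothetical sketch of 'rabbit hole' navigation as a tree of viewpoints.

type Viewport = { cx: number; cy: number; zoom: number }; // as in the earlier sketch

class SubInterface {
  children: SubInterface[] = [];
  constructor(
    public view: Viewport,
    public parent: SubInterface | null = null,
  ) {}

  // Dive into a location without losing the current view: the parent keeps
  // its viewport, and the new child starts zoomed in on the target.
  spawn(targetX: number, targetY: number, zoomFactor = 4): SubInterface {
    const child = new SubInterface(
      { cx: targetX, cy: targetY, zoom: this.view.zoom * zoomFactor },
      this,
    );
    this.children.push(child);
    return child;
  }

  // Copy this viewpoint into another window, e.g. to keep the current
  // location visible, or to transfer something to the destination.
  cloneInto(other: SubInterface): SubInterface {
    const copy = new SubInterface({ ...this.view }, other);
    other.children.push(copy);
    return copy;
  }
}
```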

We can imagine simply having a few different windows or sub-interfaces on our screen with fairly stationary boundaries between them, maybe a large 'main' one in the middle with smaller ones around the edge or in the corners.

Maybe other sub-interfaces can take the form of circular Agar.io-like 'cells,' letting different-sized cells cluster and slide around each other fluidly. Maybe we'll sometimes use grids of rectangular windows. Maybe we'll sometimes prefer grid-like arrangements with more flexible, organic-feeling boundaries, resembling membranes between cells or the strands of a spider web, with regions of the web widening and shrinking with silky smoothness as we move the boundaries. Maybe sometimes the windows can overlap, each with a distinctive tab sticking out, like the tabs in our web browsers.
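For the circular-cell arrangement, the sliding behavior could come from a standard circle-separation step run every frame: overlapping cells push each other apart along the line between their centers. A minimal sketch, with the Cell shape assumed purely for illustration:

```typescript
// Hypothetical sketch: one relaxation pass over circular window-cells.

type Cell = { x: number; y: number; r: number };

function relax(cells: Cell[]): void {
  for (let i = 0; i < cells.length; i++) {
    for (let j = i + 1; j < cells.length; j++) {
      const a = cells[i], b = cells[j];
      const dx = b.x - a.x, dy = b.y - a.y;
      const dist = Math.hypot(dx, dy) || 1e-6; // avoid division by zero
      const overlap = a.r + b.r - dist;
      if (overlap > 0) {
        // push each cell half the overlap apart, along the line of centers
        const ux = dx / dist, uy = dy / dist;
        a.x -= (ux * overlap) / 2; a.y -= (uy * overlap) / 2;
        b.x += (ux * overlap) / 2; b.y += (uy * overlap) / 2;
      }
    }
  }
}
```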

Other methods of organizing the sub-interfaces can be imagined too, and we can imagine using multiple methods simultaneously, in a nested/hierarchical way. The method may often be automatically determined by the context. Ultimately there may be no clear distinction between 'sub-interfaces' and the objects that appear within them, except where it's convenient to maintain such a distinction.

So, in other words, we can expect to be able to easily create indefinitely many virtual viewpoints, which themselves can become objects capable of being explored, played with, recombined, etc.

Commands

Any specialized task, for which a specialized interface has been developed, could be performed within these general-purpose interfaces once we navigate to the specialized interface. But we can imagine two basic operations that it might be convenient to build into the general-purpose interface, simply for selecting things and moving them from one place to another. Call them Take & Put, Get & Give, Copy & Paste, etc. We could use the first operation on any item to consume it, activate it, select it, mark it, remember it, save it, create a save point or sub-interface, etc. The second operation then ejects, transmits, posts, or conveys, to a specific location, the last item that was selected (or perhaps sometimes the collection of all such items since the last Put/Eject, etc.). We might click or tap on the screen to invoke the first operation and click again to invoke the second, or drag and release to perform both operations in one motion, or we might use other keys/buttons.
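As a sketch of how little machinery these two operations actually need: the names Take and Put, and the held-item stack behind them, are assumptions about one possible design, not a description of any existing interface.

```typescript
// Hypothetical sketch of the two built-in operations.

interface Holdable { id: string }

class Hand<T extends Holdable> {
  private held: T[] = [];

  // First operation: consume/select/remember an item.
  take(item: T): void {
    this.held.push(item);
  }

  // Second operation: eject/convey to a specific location. By default it
  // moves the last item taken; with all=true it moves everything taken
  // since the last Put, matching the variant described above.
  put(destination: T[], all = false): void {
    const items = all ? this.held.splice(0) : this.held.splice(-1);
    destination.push(...items);
  }
}

// Drag-and-release as one motion: a Take at press, a Put at release.
const hand = new Hand<Holdable>();
const folder: Holdable[] = [];
hand.take({ id: "note-1" });
hand.put(folder); // folder now contains note-1
```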

Hieroglyphics

So if ideas like those above do help us create interfaces that let us cooperatively navigate and manipulate online objects with a new level of ease, then how could this eventually lead to the complete supersession of text as a means of communication?

As we begin using these interfaces, we'll be able to handle plain old text within them, in addition to more colorful, complex, representational images. Then we'll build graphical ontologies - organized libraries of images with precisely defined meanings - in other words, new hieroglyphic vocabularies. At first, we'll probably want to create a lot of such images corresponding to words and to mathematical/coding entities. Then new entities could emerge in these graphical systems with meanings that don't necessarily correspond to any previous spoken or textual symbols.
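A graphical ontology could start out as something as plain as a registry mapping precise meanings to images, with lookup fast enough to beat typing. A minimal sketch follows; the Glyph shape and the crude prefix search (standing in for the richer visual/semantic search a real system would need) are assumptions.

```typescript
// Hypothetical sketch of a hieroglyphic vocabulary as a meaning-to-image map.

interface Glyph {
  meaning: string;     // a precisely defined meaning, initially tied to a word
  imageUrl: string;    // the hieroglyph's rendered form
  ancestors: string[]; // prior spoken/textual symbols it corresponds to, if any
}

class GlyphLibrary {
  private byMeaning = new Map<string, Glyph>();

  define(glyph: Glyph): void {
    this.byMeaning.set(glyph.meaning, glyph);
  }

  // Exact meaning first, then a prefix match as a placeholder for real search.
  find(query: string): Glyph | undefined {
    const exact = this.byMeaning.get(query);
    if (exact) return exact;
    for (const [meaning, glyph] of this.byMeaning) {
      if (meaning.startsWith(query)) return glyph;
    }
    return undefined;
  }
}
```

A glyph for one of the genuinely new meanings would simply have an empty ancestors list: nothing in speech or text ever corresponded to it.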

We'll want these hieroglyphics to look similar to whatever they represent, to evoke the meaning with their appearance. This may be easier to imagine for more concrete notions like tables and dogs than for more abstract notions like 'this' and 'that.' But for instance, we might find a circle with an arrow pointing toward the center to be a useful way of representing 'this,' 'self,' 'in,' etc., and likewise, a circle with an arrow pointing away from the center for 'that,' 'other,' 'out,' etc.
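Those two example glyphs are simple enough to render directly. Here's a sketch that draws them as SVG, with the geometry chosen arbitrarily for illustration:

```typescript
// Hypothetical sketch: a circle with an arrow pointing toward the center
// ('this', 'self', 'in') or away from it ('that', 'other', 'out').

function directionGlyph(inward: boolean, size = 64): string {
  const c = size / 2;   // center of the glyph
  const r = size * 0.4; // circle radius
  // arrow along the horizontal axis: rim-to-center for 'in',
  // center-to-rim for 'out'
  const [x1, x2] = inward ? [c - r, c - 4] : [c + 4, c + r];
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}">
  <defs>
    <marker id="head" markerWidth="6" markerHeight="6" refX="5" refY="3" orient="auto">
      <path d="M0,0 L6,3 L0,6 z"/>
    </marker>
  </defs>
  <circle cx="${c}" cy="${c}" r="${r}" fill="none" stroke="black"/>
  <line x1="${x1}" y1="${c}" x2="${x2}" y2="${c}" stroke="black" marker-end="url(#head)"/>
</svg>`;
}
```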

GSP, here we come!?
