A more explicit example, not for photos but for music, is AudioRadar [23] (see figure 3). While a song plays, the system shows its neighbours, determined not by filename but by musical character, so the user can traverse her collection not via abstract values like filename or place in the directory hierarchy but via its actual audible qualities.
Additionally, playlists can be automatically generated based on a mood. AudioRadar thus uses automatic extraction algorithms to achieve results similar to those of a system based on extensive tagging of music. Maybe some not too distant future will bring one or more tabletops to every household, or a comparable technology that allows for displaying on and interacting with larger surfaces. Within the classical Ubiquitous Computing approach, where the three main display sizes, tabs (inch-sized), pads (foot-sized) and boards (yard-sized), were first defined at Xerox PARC [54], [55], board-sized displays were used "in the home, [as] video screens and bulletin boards; in the office, [as] bulletin boards, whiteboards or flip charts" [54].
But placing a large display horizontally broadens its possibilities of usage: with touch-sensitive displays replacing non-interactive tables, for example in the living room, casual interaction becomes commonplace. The user does not have to get up to work with the system as in the PARC approach, but can sit, and thus not only extends the interaction phase because there is no exhaustion, but can also interact infrequently with the system in short interaction bursts, a kind of "parallel" interaction with the system running all the time.
Furthermore, tables invite people to place things on them, so a connection between the tabletop and these real-world objects is the next logical step, as pioneered in Tangible User Interfaces [52]. As a last point, multiple people can gather around one table, which makes it a prominent starting point for multi-user interaction (see above, 3). Tabletop displays therefore seem ideal to support all four central notions of photowork (see above, 2).
The standard hardware prerequisite for the system is a tabletop display that is able to recognize at least two concurrent sources of input; how this input happens is of secondary importance.
Target group

As for the target group, the hardware requirements almost automatically lead to a younger, more computer-savvy audience.
Taking photos regularly seems necessary as well. My ideal candidate for the system is between 20 and 35 years old, has a few years of computer experience and takes up to one hundred photos a month. The background in computing is useful because of a few concepts.
For the interaction itself, however, it is probably even advantageous to have no computer experience whatsoever: because of the radical break between a standard desktop system and a tabletop interface, getting used to the latter is easier for a computer novice. As for the background of the following scenarios: Julia, an active twenty-something, lives in Berlin in the near future.
She uses her tabletop for casually checking her emails or surfing the web while sitting on the couch and watching TV, but also to support her in the day-to-day access and organization of her ever-growing media collection. Photography, her current favourite pastime, centres around the tabletop and its interface.
As devices for taking snapshots, she owns an easy-to-use ten-megapixel compact camera and her ever-trusty cellphone with a five-megapixel camera.
After dropping her baggage next to the front door, she drags herself to the couch and puts her feet up. She starts checking her emails and then connects her phone to the table to view the photos she took.
Although the system tried to make some sense of them, they appear a bit untidy in the otherwise organized lot. Sighing, then holding her knee with a grimace, she lifts herself up and focuses the system with a short gesture on the new photos.
By spatially separating the heap with a few quick movements, she gets a general idea of the different topics she captured on digital film (the system assists her by automatically grouping similar photos into handy piles) and remembers the evening in Oslo, with the short warm-up in the not so glamourous bar, the few hours in the disco and the slightly inglorious return to the hotel.
Julia decides to group all photos into one event called "evening in oslo" and to further separate this event into three sub-events, among them "shady bar" and "partying at the sikamikaniko" (luckily, she made a note of that name!). Four gestures later she flips through those last photos with a skeptical expression: although they are not as awkward as expected, most of them are blurry or underexposed.
Before going through every one of them herself, she lets the system assess them and separate them as it sees fit. After briefly browsing through the bad ones, she deletes them all except one that might come in handy to tease her friend Lea. Content with her work, she sits back and refocuses the system on all photos of the last few months, when the photos of an older party catch her eye.
She briefly flips through them with a smile before freezing: who is that man standing behind her in that photo? Julia frantically focuses on the one photo and the new event and browses through the photos contained in "partying at the sikamikaniko", hoping, or rather fearing, that one of them shows the person as well. Having found the relevant photo, a sigh escapes her lips: it was not the same man; partying people just all look the same.
She resets the system to the soothing, flowy screensaver mode and lies back on the couch.

After hugging her mother and receiving a fatherly tap on the shoulder, she leads her parents into the living room to give them an update on what has happened in her life in the two months since they last visited. To support her tales, she starts up the photo application and displays the pictures of this time frame in the overview mode.
She focuses on the events from two months before and flips through the photos, enlarging one once in a while, not only to illustrate her narration but also to help herself remember all the little details that a photograph can keep so much more easily than the mind.
Additionally, her own organizational structure plus the titles of the events act as a guide for her. While her mother listens to her intently, her father, a self-appointed computer pioneer from back when the things still had cables, sits at the other side of the table, not really paying much attention to her, and rummages through another set of photos (luckily not the embarrassing ones).
When Julia sees his face take on a skeptical expression, she already knows what has happened before he starts apologizing. But with a sigh and a few rewind-gestures she restores the organization her father just messed up.
While her parents browse through a few older galleries from their last vacation together, Julia goes to the kitchen and fetches coffee and cake. As she puts the dishes on the table, the system subtly lets the photos flow around them, so cups and plates do not cover anything, and even crumbs are spared.
Her father is fascinated and finally vows to quit joking about the cuts he had to make in his budget to afford this monstrous table.

While Julia is still in the bathroom, Lea already connects her digital camera to the table to upload a few photos she took last week. They appear in a certain corner of the screen, spatially and visually separated from the others.
When Julia joins her at the table, both women briefly leaf through the new photos while Lea talks about them. Finally, with a smirk, she shows one where Julia is not quite looking her best. They take their places at opposite sides of the table, put their fingers on the two pictures concerned and move them on the count of three: Julia hers to her private region, and Lea hers to the little symbol representing her connected camera.
While Julia is still grinning triumphantly, her friend quietly focuses the system on one of the last of the new photos and enlarges it. Julia loses her grin as she sees herself looking even dumber than on the other one.

Letting the system do all the organizing would of course be a very pleasing alternative, but it is unrealistic that such an automated solution could provide the same degree of accessibility and speed of use as one that was built by the user in the shape of her mental image of the collection.
While it can of course be supported by automatic methods, the last step, deleting a bad photo, should always be left to the user. This sums up all activities surrounding convenient presenting, as well as making the digital information accessible and copying objects within the application. The chance to browse well is of course coupled with a plausible organization structure.
Supporting these tasks is the main requirement for the application. The application should therefore adapt to this situation and provide every user with a convenient experience. With the application centering around photos, every user should be able to bring her own collection into the application and share it with others (move and copy photos between different collections). Additionally, the designer has to anticipate that users are not standing still at the table but change their positions and even leave and return after periods of time.
While such a setup can of course be used to build a classical desktop application based on windows and menus, new forms of interaction, maybe in a variation on Direct Manipulation [26] , can be tried and used to better support novice users and let experts perform their tasks more quickly. The design should also try to bridge the gap between digital and analogue photos by employing an interaction scheme that borrows many aspects from the real world.
Additionally, unless the hardware supports sensing it, a tabletop application can never be sure where the user is currently standing. So the designer should not rely on one fixed position of the user, but allow for easy rotation and make use of general circular symmetry.
So it is necessary that the application works just as well, or even better, with a few thousand and up to ten thousand photos. In such dimensions, different rules apply for the interaction with, as well as the presentation of, photos, and the designer should take that into account. Informed Browsing should provide solutions for handling such numbers of media objects, so its main tasks, namely Overview at all times, Details on Demand and Temporary structures, should be supported.
Giving the user simple ways to regain overview, and making sure that all objects are accessible at any time and do not disappear in some dark corner of the interface, should have a high priority. A clear organization structure (see above) might help, so the user should have enough freedom to create a fitting one. Another consequence is to present important information as cleanly and concisely as possible by cutting away unnecessary parts. In short: keep it simple!
So my main design goal became the elimination of this shortcoming by building an application that could support a realistically sized photo collection of several thousand photos. I researched several existing concepts, asked friends how they organize their photos, and found that most of what the literature says is obviously true: people think that tagging is useful but too much of a hassle for their whole collection [43], though they try to do it when publishing photos on internet platforms like flickr [82].
On the other hand, almost everyone I asked had come to the conclusion that some organization was necessary and relied on a homebrew solution based on a naming convention and directory structure in the file system. Giving directories a date-event-keywords name seemed to be enough; some additionally marked special photos of a memorable event, of personal relevance or of excellent quality. The standard photo workflow was:

1. Attach the camera to the PC
2. Copy all photos to a temporary directory
3. Create and name directories based on the contents

Giving the user the possibility to sort objects into some sort of hierarchy makes sense especially in the context of Informed Browsing, which, as the name implies, also relies on the background knowledge of the user.
Letting the system do all the organizing necessarily discards information the user might have associated with a photo and which is not in some way directly contained in it.
PhotoHelix did not support hierarchical structures ("events" could only contain photos, not other events), but hierarchies make sense especially in a photo collection: the standard photo opportunity of the hobbyist is the vacation [53], which already inherently carries a hierarchical structure, even if it is only "Holidays" divided into "Arrival", "Fourteen days at the beach" and "Departure".
The typical approach, of course, is to build the classification based on time, with sequential segments that are further divided. But users might also sort their photos by topic, by the persons on them, or even by colour. Yet here a basic problem becomes apparent: taxonomies force the user to put an object into one fixed category and to link the categories in one fixed pattern.
So a carefully constructed "People on photos" organization falls apart if one has photos with more than one person on them: does the photo of Simon and Tanya belong to the category "Simon", to "Tanya", or to both? Additionally, if one searches for the category "friends", does the system also return "acquaintances" and all the subcategories of "friends"?
Taxonomies have their limits. In a hierarchy there has to be some highest category, some place where all other categories fit in, which has to be there to keep it all together but which would be of only little use to the user ("photos"?). The second-highest classes would be more essential. Against the background of tabletop computing and multiple concurrent users, I developed the notion of "streams" as high-level containers that consist of multiple photos from the same source and are sorted by time.
Sources could be different cameras with which a photo was taken, for example a digital camera or a cell phone. But there might of course be other sources as well: scanned analogue photos, photos from another digital camera, relevant photos one got from a friend or found on the internet, or different directories in the file system. If we draw a higher line of abstraction, each source might be a different user of the system, with the previously mentioned personal sources as categories on a second level; this supports more than one user and, by copying photos from one stream to another, the sharing of photos between them as well.
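The stream concept described above can be sketched as a small data structure: photos from one source, kept in chronological order, with sharing realized by copying between streams. The class and method names here are illustrative, not taken from the actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class Photo:
    path: str
    taken: datetime

@dataclass
class Stream:
    """A high-level container: photos from one source, sorted by time."""
    source: str                      # e.g. "digital camera", "cell phone", "scans"
    photos: list = field(default_factory=list)

    def add(self, photo: Photo) -> None:
        # Insert while preserving chronological order.
        self.photos.append(photo)
        self.photos.sort(key=lambda p: p.taken)

    def copy_to(self, other: "Stream", photo: Photo) -> None:
        # Sharing between users means copying, not moving, a photo.
        other.add(photo)

cam = Stream("digital camera")
phone = Stream("cell phone")
cam.add(Photo("oslo_01.jpg", datetime(2008, 5, 3, 22, 15)))
phone.add(Photo("bar_04.jpg", datetime(2008, 5, 3, 23, 40)))
cam.copy_to(phone, cam.photos[0])   # the photo now exists in both streams
```

Keeping the time-sorting inside the container mirrors the design decision that streams are always ordered by time, regardless of where their photos came from.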
It breaks down the gridlocked expectations of people who anticipate certain standard interaction patterns because of their previous experience with computers, and allows the designer of an interface to establish completely new modes of communication. Additionally, the original WIMP approach is a product of the late seventies, adapted to the hardware of that time.
A paradigm built for the possibilities of current hardware may fail on standard PCs, but the tabletop platform could get away with it. One approach is to rely on existing, proven methods like direct manipulation [26] and combine them with a physics engine, as demonstrated by Agarawala et al. Photos should, at least in part, behave like their analogue counterparts, making it possible to shift them around on the table with a finger and to rotate them easily.
Picking up a photo and analyzing it in detail is obviously not possible, but at least some of the disadvantages of physical objects, like aging or photos sticking together, can be avoided. Giving users the possibility to treat their digital photos just like analogue ones, which they have learned to love and treasure from an early age, lowers the barrier of entry, increases the general usability, improves their impression of the interface and makes them more inclined to stick with a purely digital version of the pictures instead of having them developed.
After lessening the downsides of interaction with digital photos, the interface should also provide the users with the advantages: Digital photos are indestructible, can be copied ad infinitum, can be easily enlarged and shrunk and manipulated in highly creative ways without scissors or glue.
Tabletop applications could enrich the way we interact with photos: directly "touching" objects, cutting out portions of photos without destroying them, drawing on them, adding hyperlinked, hideable notes, putting them in multiple hierarchies, virtual albums etc. An abundance of applications is imaginable. Especially the task of building a photo album could be greatly facilitated. In a study by Frohlich et al., hardly any of the participants were able to fulfill this goal, because of the tediousness and complexity of the task, which "often fell to the wife or mother in the families".
Exploiting the directness of multi-touch interaction and the robustness and repeated applicability of digital information could strip the task of building an album of its physical dullness, bring out the creative core again and maybe turn it into an activity the whole family can share, sitting at their living room tabletop display.

Displaying a large number of photos

A basic problem I had to overcome was how to display a larger number of photos.
We as human beings interact daily with physical objects and must have gathered some skill in doing so. Yet, many studies suggest that a 3D interface has no clear advantages over a 2D one [9] and might even yield disadvantages in spatial memorization [10], [11] or general usability [12]. One option is scrolling, which can happen either continuously or on fixed pages. The disadvantages of this method are the absence of the global context and the tediousness of scrolling through a large collection with only a fixed number of items visible at one time.
The advantages are clear: The user can control how much information she wants to have at any given moment, can see the local and global context of an object and first get a quick overview and then enlarge an interesting object without having to scroll through a number of pages.
Systems that implement this notion of Pan and Zoom are for example Photomesa [7] and MediaBrowser [17]. The main flaw of such an interface is its scalability: Zooming out and getting an overview of the whole collection only works up to a certain number of items, before their representations shrink to unrecognisability.
An example from \”Pad\” [35] shows only the years in a calendar on the highest level and gradually fades in months and days as the user keeps on zooming in. Other exemplary interfaces are Time Quilt [27] and Calendar Browser [20] see above: 3 that visualize photo collections by using this approach. Semantic zoom alleviates the main problem of a zoom-interface while preserving its advantages, but the automated summarization process necessary to do so can of course fail and leave the user no choice but to scroll laboriously through the whole collection.
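The calendar example above can be captured in a few lines: the granularity shown is picked from the current zoom factor. A minimal sketch; the thresholds below are assumptions for illustration, not the values used in Pad.

```python
def semantic_level(zoom: float) -> str:
    """Map a zoom factor to the calendar granularity that is shown.

    Illustrative thresholds: below 1.0 only years are legible,
    months fade in next, and days appear last.
    """
    if zoom < 1.0:
        return "years"
    if zoom < 4.0:
        return "months"
    return "days"
```

The point of semantic zoom is exactly this switch of representation: instead of shrinking day cells into illegibility, the interface changes what it draws at each scale.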
In the context of Informed Browsing, semantic zoom can mean the combination of similar objects into one with a clear representative.

I will describe three different stages of the design, from the initial one via the first actual one to the second and final one, which was mostly shaped by the experiences during implementation and the results of the focus group.
The differences between the second and third one are, with one exception, only dropped features, so I will present the second design in detail and describe the changes in the implemented one later.

First design

In my initial design, all streams were placed vertically on the screen, with two users sitting face to face at the sides of the tabletop, each having a personal workspace on their side of the table (see figure 4). Streams could be made visible by toggle buttons on one side of the display, to let the users narrow down the current focus.
All photos were sorted along a time line, and multiple photos were combined by similarity under one representative if the available space did not suffice. Single sections of the time line could be scaled independently for both users with a two-fingered gesture; streams on each side of the screen would be arranged according to the corresponding time line, and the vertical position of a stream could be changed by dragging it with one finger.
The vertical axis would allow several different scales. Those substreams could also be moved to the personal workspace and handled in detail. Changes made in one workspace would be propagated to all instances of a photo. As an additional way of organizing photos, and especially of conserving all the little details that would otherwise be lost, photos on the lowest level of the stream hierarchy could be connected and furnished with additional comments and titles (see figure 4).
Those details would only become visible if the corresponding stream was enlarged enough, in a form of semantic zoom. Collaboration was supported by letting the users combine their workspaces into a large one in the middle of the screen, placing a small copy of the stream view on their sides (see figure 4). Both workspaces were scalable, so the need for space could be dynamically adapted, going as far as switching to a pure one-user operational mode by shrinking the second workspace to its minimal size.
The way to the refined version

This first approach already had many of the later features and all basic ideas in it and seemed promising. At the same time, all the fixed elements (especially the stream buttons) would take up a lot of screen space even if they were not needed very often.
So an elaborate system for adding written notes to photos and events would probably have been hardly used in practice. The problem of "forgetting details of people and events depicted in old photos" [18] remains pressing, but forcing the users to write long messages on a touchscreen is no elegant solution. Different, complex branches are uncommon and would at most appear between photos from different sources, which would not have been supported by the system anyway.
This, plus the aforementioned tediousness of text input, made the idea seem less intriguing. I started to think about different aspects of my sketch, and it seemed to do many things right but to have an overall patchy nature: some things would probably work but did not fit in really well. The number of users is flexible and its fluent change is supported by the interface.
Having these concepts in mind, I refined the initial design and produced the first actual version. Higher objects have to contain lower objects: no higher object can exist without containing at least one lower object. Intermediate steps can be skipped. The lowest objects in the hierarchy, photos, can be contained within every other object. Objects belong to two categories: they are either created and manipulated by the user (workspaces, clusters, photos) or dynamically by the system (piles).
At the top of the object chain lies, at any time, either the background or a workspace. The latter are unstable. Piles are built by the system if needed, so a photo might belong to a pile in one view while it is unpiled in another, which is why piles have no abstract version either.
Every user interaction with the system can thereby be classified into one of two categories: it is either local, meaning its effect is restricted to the current view and does not influence the model, or global, where the model is changed and these changes are propagated to all visual representations to keep them consistent.
As an example, changing the scale of a photo in one view is a local action and has no impact on the size of other photos (the display size is a purely visual attribute and is not saved in the model), while combining three photos into a cluster is a global one, modifying the model as well as the representations of the photos in other views.
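This local/global split can be sketched with a simple observer-style coupling between the model and its views. All names here are hypothetical, chosen to illustrate the propagation rule rather than the system's actual classes.

```python
class PhotoModel:
    """Shared state; global changes are propagated to every view."""
    def __init__(self):
        self.clusters = []
        self.views = []

    def register(self, view):
        self.views.append(view)

    def combine_to_cluster(self, photos):
        # Global action: the model changes, so every view is notified.
        self.clusters.append(list(photos))
        for view in self.views:
            view.refresh(self)

class View:
    def __init__(self, model):
        self.scale = {}        # photo -> display size; purely visual state
        self.refreshed = 0
        model.register(self)

    def set_scale(self, photo, size):
        # Local action: affects only this view, never the model.
        self.scale[photo] = size

    def refresh(self, model):
        self.refreshed += 1
```

Setting a photo's display size touches only one view's `scale` dictionary, while `combine_to_cluster` triggers a `refresh` on every registered view, which matches the local/global distinction described above.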
Because of the possibly destructive nature of global actions, I initially thought about making a virtue of necessity and using only two-finger gestures for them. The rationale was that the SmartBoard (the target hardware of the system, see 5) can only track two concurrent inputs, so a two-finger gesture would require one finger from each of two users.
Thus, it would be clear that every global change was an agreement between all users, and no user could make such changes behind the other's back.
First, if she is currently not working on something, maybe doing things away from the tabletop and not paying attention to the happenings on screen, global changes could slip by unnoticed and unconfirmed. Second, limiting the interface to a certain hardware restricts its spread and outlook. Third, one local action, namely the scaling of a photo, is much more convenient and natural to do with two fingers (see below) than with one (the one-fingered alternative would have been using a button, as in PhotoHelix, or touching the border, like scaling a window in desktop applications).
Additionally, it allows holding an object with one finger while repositioning the other, more or less like lifting a mouse from the table, which is handy in some situations where releasing the object would cause it to snap back (see below). That means that, for example, translating a photo is performed by the same gesture as translating a pile or a workspace.
Transforming an object

Most interface objects can be transformed, i.e. translated, rotated and scaled. Transformation is achieved by using two different techniques that combine either translation and rotation (RNT) or rotation and scaling (RNS).
This rotation is based on simplified "pseudo-physics developed and adjusted for interaction ease" [32]. A pure translation without rotating the object is possible as well: if the user touches the object near its center, the image is treated in the same way but not rotated (see figure 4). In RNS, the object is scaled and rotated so that the two input sources always stay at the same relative positions. It is similar to rosize, used by Apted et al. [5].
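The geometry behind a two-finger rotate-and-scale like RNS can be sketched compactly with complex numbers: the ratio of the moving finger's new and old offsets from the stationary finger yields both the scale factor and the rotation angle. This is an illustrative reconstruction under that assumption, not the system's actual code.

```python
import cmath

def rns_update(anchor, start, current):
    """Rotate-and-scale from two touch points (a sketch of RNS).

    `anchor` is the stationary finger; `start` and `current` are the
    second finger's old and new positions. Returns (scale, rotation_rad)
    such that the grabbed point stays under the moving finger.
    """
    a = complex(*anchor)
    v0 = complex(*start) - a     # old offset of the moving finger
    v1 = complex(*current) - a   # new offset of the moving finger
    ratio = v1 / v0
    return abs(ratio), cmath.phase(ratio)
```

For example, keeping one finger at the origin and moving the other from (1, 0) to (0, 2) doubles the object's size while rotating it a quarter turn counter-clockwise.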
Scaling is in principle unlimited, but restricted by the size of the region where input points are recognized (normally, the corners of the tabletop). This restriction can, however, be circumvented by lifting up one finger, repositioning it and continuing with RNS. The transition between the two interaction techniques is fluent: lifting one finger or placing it on the table causes the switch, allowing quick and uninterrupted interaction.

Advanced interaction

Transforming an object is the most basic activity in flux and probably the most used, which is why it is directly triggered if the user touches an object and moves the finger.
Still, to provide more than just those general actions, the interface has to support some kind of input mode change. Several approaches to this problem exist in pen-based interfaces (for an overview see [33]). The user is now able to perform more complex actions, namely creating new clusters by using the Circle-gesture and creating a new workspace by performing the Lasso-gesture (see figure 4).
The resulting polygon is drawn in red and automatically closed, covering a certain section of the screen. New points are added as the input moves, and the display is updated accordingly.
Lifting the input source from the table is interpreted by the system as ending the gesture, and the result (forming a new cluster from the enclosed objects) is shown.
Moving the input now lets the user draw the "tail" of the lasso. If the input source is lifted, the gesture is finished, leading to the creation of a new workspace with its center at the end of the lasso, containing copies of the enclosed objects. If no objects are within the lasso, an empty workspace is created. A second level of advanced interaction becomes accessible if the user continues to hold the finger down after the cursor has switched from red to blue. After waiting for the same amount of time again, an object-specific marking menu [30] appears that allows further manipulation of the affected object (marking menus are available for views and clusters).
By moving the input source the user can choose an option from the marking menu and execute it by lifting the finger (see figure 4).
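The dwell-based mode levels described above can be sketched as a simple mapping from how long a touch has rested still to the active input mode. The 0.5-second step is an assumption for illustration (three steps taking roughly one and a half seconds in total), not a value mandated by the design.

```python
def interaction_level(hold_time: float) -> str:
    """Map how long a touch has been held still to the active input mode.

    Illustrative dwell scheme: transforming is immediate, gestures
    unlock after one timeout, the marking menu after two, undo after
    three. Each timeout is assumed to be 0.5 seconds.
    """
    if hold_time < 0.5:
        return "transform"     # immediate: translate / rotate / scale
    if hold_time < 1.0:
        return "gesture"       # Circle- and Lasso-gestures (red cursor)
    if hold_time < 1.5:
        return "marking_menu"  # cursor has switched to blue
    return "undo_mode"
```

Ordering the levels by how often they are needed keeps the most frequent action (transforming) instantly available while still exposing rarer modes through the same single-touch channel.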
Therefore, giving the user a way to undo an unintended action is highly important. Yet, this undo feature would not be used as commonly as the other techniques. The different interaction techniques are therefore staggered along the time axis depending on their relative occurrence: the most common interaction is accessible without waiting, while the most uncommon one needs the longest waiting time to become available.
After the marking menu (if any) has appeared, the user has to hold still for a last timeout to access the undo mode (a reasonable time for one timeout is 500 milliseconds, so the third one takes only one and a half seconds of waiting). The marking menu disappears, a light gray veil covers the interface, and the user can undo and redo the last changes by drawing either a counter-clockwise circle for the former or a clockwise one for the latter.
One full circle encompasses more than one action, and if the user draws more than one circle, the speed of undo or redo adapts to it: while the first drawn circle might undo ten actions, the third one could do the same to thirty, which lets the user "scroll" faster through whole portions of time. The number of actions affected by one circle depends on their general impact: global actions weigh more than local ones and need more drawing time to be undone.
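The accelerating circle behaviour can be sketched by accumulating the drawn angle: each completed circle undoes actions at a higher rate (ten for the first circle, twenty for the second, thirty for the third, and so on). The rates are one possible reading of the description above, not the system's actual tuning.

```python
import math

def actions_to_undo(total_angle: float) -> int:
    """How many actions a counter-clockwise drawn angle undoes.

    Illustrative accelerating rate: circle i (counting from 1) undoes
    10 * i actions, so drawing more circles "scrolls" through history
    faster and faster.
    """
    circles = total_angle / (2 * math.pi)
    full = int(circles)
    undone = sum(10 * (i + 1) for i in range(full))   # completed circles
    undone += 10 * (full + 1) * (circles - full)      # current partial circle
    return int(undone)
```

Weighting could be added on top of this by charging global actions more angle than local ones, as the text describes.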
Abrupt actions like the deletion of an object are shown gradually as well, to allow the user to estimate their impact (ending the undo mode in such an intermediate state leaves the system at the last fully active position in the history).
The undo mode is restricted to the view it was started on, which means that on a workspace only the actions that were performed on it, back to its creation, can be undone. After lifting the input source again, the undo mode is stopped and the system reset to the chosen position in history. In the following, I will describe each interface element and its assigned interaction options in more detail.
Background, Workspace, View: The background and a workspace are two concrete variations of a view on the underlying model of photos and clusters, so the term "view" is used to refer to either of them.
The background is a special case of a workspace: it cannot be closed or translated and shows the whole photo collection at any given time. Additional workspaces can be created by the user with the Lasso-gesture (see above) and are shown on a layer above the background. A workspace contains only a certain section of the collection (but can of course contain the whole as well) and can be manipulated by the whole range of transforming gestures.
It is closed by scaling it to a small size with two fingers (it turns red in the process to signal this to the user). This physical behaviour not only leads to a more natural feel when interacting with, for example, photos (see above), but also has the side effect that all objects are visible at any time and no two objects can slide below one another. Plus, users can quickly free a screen region by wiping around it with one object, pushing all other objects away.
The concept of workspaces tries to fulfill the options proposed for movable work surfaces by Pinelle et al. A great number of objects might be on one view at the same time, so the system provides a way to arrange them according to a scale.
By waiting for two timeouts, the marking menu appears (see figure 4). The objects lose their physical nature and are rearranged, being combined into piles in the process.
This piling becomes necessary if the screen real estate does not suffice to show all objects at satisfactory sizes. The different available orders are shown in figure 4. All photos that were taken during this period are arranged within it; their position additionally depends on their affiliation with a cluster, because all main clusters are placed above one another.
If there is not enough space available in a column, photos are combined based on their vicinity in time. When a column is resized, the other columns are shrunk accordingly and all photos are rearranged and either unpiled or piled depending on their new size. Similarity: In this order, main clusters try to lie near other clusters that are similar to them. This grouping of similar objects is repeated within them: contained subclusters act the same, and contained photos are not only piled based on their similarity values (most similar ones first), but also move near other, similar photos or piles and even in the direction of other main clusters whose contents look like them.
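The time-based piling just described — photos combined by their vicinity in time whenever a column overflows — could be realized as a simple greedy merge. This is an illustrative sketch, not flux's implementation; the function name and the capacity parameter are my own:

```python
# Hypothetical sketch: merge the temporally closest neighbouring groups
# until at most `capacity` objects (photos or piles) remain in a column.

def pile_by_time(timestamps, capacity):
    """timestamps: when each photo was taken. Returns groups of
    timestamps; singleton groups are photos, larger groups are piles."""
    groups = [[t] for t in sorted(timestamps)]
    while len(groups) > capacity:
        # find the pair of adjacent groups with the smallest time gap
        gaps = [groups[i + 1][0] - groups[i][-1] for i in range(len(groups) - 1)]
        i = gaps.index(min(gaps))
        groups[i:i + 2] = [groups[i] + groups[i + 1]]
    return groups
```

The greedy choice merges the closest-in-time neighbours first, so photos taken in one burst end up in the same pile before photos from different occasions are ever combined.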
Quality: The higher the quality value of a photo, the further to the left it lies.

Photos: Photos are the main objects in flux. Each photo has attributive metadata, like its original resolution, the date it was taken, its quality value (determined by the application), etc.
This information is centrally held within its model version. The possibly multiple visible versions show the photo and can be scaled, translated and rotated. They are squarish with a white border that allows the users to quickly distinguish two photos and estimate the number of visible photos at first glance. Additionally, it reduces the visual clutter by giving all photos a uniform appearance.
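The model/view split described here — metadata held once centrally, while each visible version carries only its own transform — can be sketched in a few lines. The field names are illustrative assumptions, not flux's actual data structures:

```python
# Sketch of the described split: metadata lives once in the model;
# every visible version only stores its own on-screen transform.

from dataclasses import dataclass

@dataclass
class PhotoModel:
    path: str
    resolution: tuple   # e.g. (4000, 3000), the original resolution
    taken: float        # timestamp the photo was taken
    quality: float      # quality value determined by the application

@dataclass
class PhotoView:
    model: PhotoModel   # shared reference to the central metadata
    x: float = 0.0
    y: float = 0.0
    scale: float = 1.0
    rotation: float = 0.0   # each visible copy transforms independently
```

Two views of the same photo share one `PhotoModel`, so attribute changes (e.g. an updated quality value) are immediately consistent across all visible copies.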
Photos can be combined into clusters with the Circle-gesture, or copied onto a new workspace with the Lasso-gesture. It is possible as well to drag a photo onto a workspace to create a copy of it there, or to drag it from a workspace onto the background to remove it (no additional copy is created on the background).
Piles: Piles are not created by the user and have no counterpart in the model: They are fragile objects that can only be created by the system and are used to reduce the overall visual information. Piles are depicted as pseudo-three-dimensional packs of pages that allow the user to estimate how many objects are contained. The user can interact with piles in the following ways: She can translate and rotate them with the standard RNT technique, but they are not scalable: if a pile is touched with two fingers, one finger marks the position of the bottom end of the pile while the other marks the top end.
All photos are placed accordingly between the two. The pile can be "closed" again by bringing the two fingers together. A pile can be dissolved by waiting for two time-outs. In this case, all contained objects are shrunk so that together they take up roughly the size of the pile and are placed next to each other.
Each object is then treated again like an unpiled one. Once a pile is dissolved, it cannot be restored — only re-arranging its view will create piles anew. A copy of a pile can be brought to another workspace by either drawing the Lasso-gesture above it or dragging one of its ends onto the view.

Clusters: Clusters can be created by the user and are used to structure the collection. The top-level clusters are called streams and depict the source of the contained photos.
Each cluster has a colour and a name that the user can choose. Clusters can contain all other objects, even other clusters, which means that they can be nested and support hierarchical structures (see above), but every object can only belong to one cluster. The borders of a cluster are shown in its colour and allow the visual attribution of objects to the cluster.
A cluster can be created by using the Circle-gesture around other objects. Its parent cluster is then determined by finding the lowest cluster in the cluster-hierarchy that contains all affected objects; this cluster becomes the container for the newly created one. If no common container can be found (because the objects belong to different top-level clusters), the new cluster becomes a stream.
Clusters can be translated, rotated and scaled just like other objects. More advanced interaction with clusters is performed by using their marking menu (see figure 4). By deleting a cluster, all contained objects become parts of the parent container (either another cluster or a view). A whole cluster can be moved to another workspace by dragging it on (using the Lasso-gesture only moves the affected objects into the workspace).
They should have enabled the user to interact more freely with her collection and adapt flux to her needs. In the following, I present these dropped features.

Lines

Lines were planned as a means to add another, non-hierarchical layer of organization to the collection, to address the strictness of hierarchical taxonomies.
The idea was inspired by network visualizations [22], [51] that use lines to connect objects. Lines in flux would have connected arbitrary objects and would have had a user-chosen colour and thickness.
The user would have been free to use them as she saw fit. Exemplary uses would have been adding a second layer of organization orthogonal to the existing, cluster-based one. To reduce the visual clutter within the interface, lines would not have been visible all the time. They would have appeared if there was enough space available or if the user interacted with a connected object. In the second case, more and more neighbouring lines would gradually have become visible the longer the interaction lasted.
Textblocks

Textblocks were freeform squarish objects containing text that the user could enter with the finger. Their use was again not predetermined: they could have been used, for example, to annotate a photo with additional information (probably connecting the Textblock to the photo with a line), to add a textual background to a cluster, or to create tags. Especially the creation of tags would have been supported by another order for workspaces ("Text") that would have worked akin to the similarity order but would have used the titles of clusters and the text from Textblocks to position the screen elements.
The nearer a Textblock lay to a photo, the more relevant it would have been considered for it, and Textblocks with identical texts would have been treated as the same. Thus, the user could have created her own photo tag cloud (a type of weighted list most prominently used by certain web pages like flickr [90] or del.icio.us).
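Since the feature was dropped, no concrete weighting was specified; one plausible reading of "the nearer, the more relevant" is a linearly decaying distance weight, with identical texts merged into one tag. The function name, cutoff distance and decay are my assumptions:

```python
# Illustrative sketch (not from flux): weight tags for one photo by the
# proximity of Textblocks; identical texts accumulate into one tag.

from collections import defaultdict
from math import hypot

def tag_weights(textblocks, photo_pos, max_dist=100.0):
    """textblocks: list of (text, (x, y)) positions on the workspace.
    Returns {tag: weight} for the photo at photo_pos, where weight
    decays linearly from 1.0 (same spot) to 0.0 (at max_dist)."""
    weights = defaultdict(float)
    px, py = photo_pos
    for text, (x, y) in textblocks:
        d = hypot(x - px, y - py)
        if d < max_dist:
            weights[text] += 1.0 - d / max_dist
    return dict(weights)
```

Summing the decayed contributions per text is what turns repeated, nearby Textblocks into the "heavier" entries of a tag cloud.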
Textblocks would again only have become visible if there was enough space available or if the user interacted with an object in their vicinity, with gradually more and more Textblocks appearing if the interaction lasted for some time.

Boxes

Inspired by the advantages of digital photographs, I created the concept of Boxes for flux. A Box was supposed to be a container for visual information, just like a photo, but not derived from an existing file on the file system but newly created within the application.
The user should have been able to create a rectangular object and paste other objects or parts of them into it. A Box would have provided an edit-mode, where it would have been placed behind the other objects and could have added the visual information of overlapping parts to itself, just like a regional screenshot. Boxes would have been treated just like regular photos, which means they could also have been put into piles, arranged according to their similarity to other photos, etc.
Maybe the options for manipulating a Box would in the end have been taken over for photos as well, to spare the user the detour of creating a new Box and pasting the photo into it to make it manipulable. Boxes would have been visible the whole time (at least if they did not disappear in a pile).
My aim was to find out whether the thoughts I had about Informed Browsing and flux were valid, or whether there was some general flaw that I had overlooked.
Another reason was that I am not part of the target group, because I do not take enough photos, so I was interested in what people who worked with a larger number of photos would think about it. The focus group had 3 male and 5 female participants. All of them had an expert background with computers: 6 of them were students of Media Informatics, 1 was a PhD-student of Business Informatics with a master's degree in Computer Science, and 1 held a diploma in Media Informatics.
I invited only computer-savvy people to get more concrete suggestions and criticism and to rely on their existing knowledge. People with a different background might have become swamped by the new impressions and directed their criticism (if any) not at the given system but at the hardware or the whole setup.
Each participant additionally had a background in photography. Their experience ranged from mainly amateur photography with a digital compact camera to several years as a professional photographer. The whole event was filmed with two cameras and had roughly this structure (for my full script see Appendix A):

1. Introduction of the project and the participants.
2. Photos: How are photos (analogue and digital ones) organized, how and why are they accessed again.
3. Photowork: Presentation of the general photo workflow by Kirk et al.
4. Informed Browsing: Presentation of the concept, what metadata could be used.

The first block of questions addressed the way the participants worked with photos. We developed different ways to organize analogue and digital photos and compiled them on a whiteboard (see table 4).
For the topic of tagging, the participants understood the benefits but did not perform it for their whole collection because of the amount of work. But while tagging was not used on the local collection, online portions of it were tagged.
A popular organization scheme was creating a directory when downloading pictures from the camera and giving it a specific name in the format date-location-keywords. Some participants confirmed using that, and one even said that he additionally "[has] directories for topics that are connected by file system links [to the photos]" as a second, topic-based organization. Some of the participants used dedicated photo software, mainly the bundled one coming with the camera.
Web portals were used by some, but sparingly, and were seen as no alternative to a local collection. Privacy concerns were touched on briefly in this context, with one user complaining about people uploading private party snapshots to unrestricted websites (like flickr or lokalisten [76]), while another praised the finely adjustable privacy settings of facebook [68], which allow the user to make photos available to a restricted group of people only.
Failed photos were kept by some in a special directory for possible future use, or kept if the photo was the only version of some motif one had.
One said that she deleted photos "only if one cannot use them at all, which is true for a really very, very small section [of all photos]". Selecting happened partly directly on the camera for photos that were obviously botched (all black, eyes closed) or on the PC after downloading from the camera. I gave two concrete tasks to make the participants evaluate their methods of organisation. While the first one (find a photo from a party that was exactly one year ago) seemed easy for them with their date-based directory structures, the second one (find a good photo of a parent or the partner) was harder to do without having a concrete photo and its date in mind.
Tagging was named as a solution, but it was, as already mentioned, seen as tedious and not available on the file-system level (only marginally, by putting tags in the directory name) but application-dependent (and many did not use such applications). I continued by explaining the ideas behind Informed Browsing and its emphasis on metadata.
We then compiled sources of metadata on the whiteboard (see table 4). Afterwards I explained the concepts of overview-at-all-times, details-on-demand and temporary structures, which met with overall acceptance. After describing the interaction techniques for photos (moving, scaling, creating clusters) we went to the Instrumented Room and played around with the system.
The prototype I showed during the focus group (see figure 4) was still limited: All photos could be sorted by time or placed randomly and were arranged in a rectangular grid on the background. If the available space did not suffice, the system built piles from either temporally near or similar photos.
The two-finger open-and-close gesture of piles was not available, but all photos in one pile were connected by an elastic joint, which meant that by dragging one of the photos all the others followed at a short distance, allowing for a similar effect.
Piles were dissolved by touching them for half a second. Marking menus were not implemented either, so changing the order of the background was performed by pressing a key on the keyboard. The physics engine, on the other hand, was fully operational and moved all objects in a realistic fashion (one participant commented: "It looks like water, like they are lying in the water.").
Collisions when two users tried to interact with the system at the same time were accepted and managed by ad-hoc social protocols. Participants also asked spontaneously why a certain pile contained only two photos while a third, similar one was not in it (the number and height of piles was connected to the available screen space and the number of objects — photos and piles — visible at the same time), and whether piles would be created if the screen space shrunk, for example because a photo was enlarged.
Also, piles were difficult to distinguish from single photos since, because of the total number of photos and the screen real estate, they only contained two photos. After showing the prototype, we returned to the meeting room and discussed the system in detail.
Dissolving a pile was not seen as optimal, because "you send all this nice automation to hell"; so piles should, once opened, be combined again after the user stops interacting with the contained photos, or the system should at least group photos if there is not enough space available. It was again mentioned that it was seen as a serious flaw if some obviously similar photos were not grouped into a pile.
Another point was the inability to gather from the looks of a pile the number of contained photos (a proposal was putting their number as an overlay on the pile). Regarding scaling, one said that it should not be possible to scale a photo to more than its actual resolution, or that each photo should have a fixed maximum size with unrestricted resizing of parts of it allowed only within this border — probably because more than one participant had difficulties using the two-finger scaling without enlarging the object to the whole screen size.
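The first of these suggestions — never scaling a photo beyond its native resolution — amounts to clamping the scale factor. A minimal sketch under my own naming (flux did not specify this):

```python
# Hypothetical sketch of the suggested limit: clamp the requested scale
# so the displayed size never exceeds the photo's native resolution.

def clamp_scale(requested_scale, display_size, native_resolution):
    """display_size: (w, h) of the photo at scale 1.0 in screen pixels;
    native_resolution: (W, H) of the original image file."""
    w, h = display_size
    W, H = native_resolution
    max_scale = min(W / w, H / h)     # largest scale without upsampling
    return min(requested_scale, max_scale)
```

Taking the minimum over both axes guarantees that neither the width nor the height is ever rendered above one screen pixel per image pixel.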
One participant brought up the topic of 3D visualization, which resulted in a discussion about its downsides (which perspective to use if users are gathered around the table, missing context information, and general confusion with the ever-changing piles) and advantages (more space, pushing photos to the side). She continued by proposing "planets" at different coordinates that hosted photos and could be reached by navigating the 3D-space, which sounded interesting but was too radical a break from the actual design.
This last question was meant to provoke the participants to really think about the disadvantages of the system. One said that it would be a cool way to show new photos from a vacation to friends. My basic train of thought was confirmed and I could take with me many valuable suggestions. I also gathered experience from the event as such and afterwards drew conclusions as to what was not optimal. Additionally, the weather was warm and muggy, so towards the end some participants were clearly tired and contributed only marginally to the discussion.
The first part, the topics regarding the ways to organize and work with photos, was definitely interesting but could have been cut from one hour to half an hour. Also, the theoretical part about photowork and Informed Browsing was a bit too extensive, as it was too novel a concept anyway to provoke spontaneous discussion.
This was especially the case for the discussion about flux right after the demo, where my first question ("What do you think of it?") was much too open. While this brought some interesting suggestions after some time, it also made it harder for the participants to answer, because they were unsure whether their contribution would be fitting and not drift off too far from the actual issues.
As a warm-up I had two rounds at the start of the focus group, where everybody had to introduce him- or herself, but the effect of it lasted only about half the duration of the event. Maybe regularly interspersing tasks for all participants would have been helpful.
I also had to work within the limits of the hardware that I had discovered in the meantime. So, after some contemplation I finalized the one design which would be the eventual shape of flux (see figure 4). It was meant to work for users as a concise starting point into the collection, where they could gain a rough overview of the collection and delve deeper into it on a separate workspace.
But it probably could not have provided that in its current state. The main flaw was the physics engine: While it certainly looked nice and was fun to use, it was not practical for this special use case. First, the "swimming" photos generated a lot of visual noise — it was hard to concentrate on the collection with the constant colliding and drifting in the corners of the field of view.
Second, users tended to collide with one another while interacting with the system, even if they worked in different corners of the screen. This happened because collisions quickly propagated through the whole background, even when translating a photo only a small distance. With the scaling of photos it became unbearable.
Third, the physical movement of objects made an order useless because it was always short-lived. Shifting one photo a bit triggered a chain reaction that moved most of the other ones too, so underlying time scales etc. quickly lost their meaning. Furthermore, while the problem might not be as pressing for a single user, who might remember where she moved her photos and can quickly reset the order, it becomes much worse for multiple users, who can leave the table, miss changes, and return to find a totally different setup.
Letting the users freely translate the objects on the background reduces ordering them to absurdity. In the current design, objects on the background are therefore no longer affected by the physics engine. They are fixed to their positions and cannot be moved, not even by the users; instead, interacting with an object creates a temporary copy of it. Those copies can be manipulated using the standard RNT and RNS gestures but are dissolved again once the user lets them go.
If an object has an active copy, it gets a whitish overlay to symbolize that (see figure 4). While the copy of a pile is active, the single photos can be manipulated just like the unpiled ones. The rest of the interaction stays the same.
Clusters can be compiled with the Circle-gesture, workspaces with the Lasso-gesture, and copies of photos or piles can be moved to an open workspace by dragging them above it and letting go. By fixing the positions of the main screen elements the above problems are compensated, and all important actions are still possible — even playing around with physical photos, because the physics engine is still running for all workspaces and even amongst them.
Streams and the Stream-order

Having drifted away from the idea of streams, I returned to it after some talks with my tutor, who especially liked the notion of the inherent separation of collections by different users (simply a single stream for each of them). While I might have distanced myself from it conceptually, streams were not hard to bring back implementation-wise as top-level clusters (before, all photos were initially unclustered, like in the focus-group prototype), stemming from the configuration file.
The prerequisites — that every column uses the vertical space as extensively as possible and that even empty columns have the same width as the others — led to a situation where the largest part of the screen was filled with whitespace and a few packed columns, which was more or less the complete opposite of my expectations.
The problem, of course, was that the linear time line was global, spanning all streams and mostly condensing them in one or two columns. While it might not have met my expectations, it certainly fulfilled the function of letting the users see temporal correlations between different streams and see patterns (see above) in the structure of the photo collection.
So I did not want to drop the Time-order in favour of a new one and made the new Stream-order an addition to the existing ones. The Stream-order separates the screen vertically and gives every cluster room according to the number of contained photos.
Those are aligned along the horizontal axis depending on the date they were taken, so every cluster fills the whole width of the background. Piling is special here, as the system does not pile photos based on the time they were taken alone but on their visual similarity as well. The rationale behind this is that different bursts of photos are not necessarily separated by time.
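This combined time-and-similarity piling can be sketched as a single segmentation pass over the time-sorted photos: a new pile starts whenever either the time gap or the visual dissimilarity to the previous photo becomes too large. The thresholds and the similarity function are placeholders, not values from flux:

```python
# Illustrative sketch: segment time-sorted photos into bursts (piles).
# A new pile starts when the time gap exceeds `max_gap` OR the visual
# similarity to the previous photo drops below `min_sim`.

def segment_bursts(photos, max_gap, min_sim, similarity):
    """photos: list of (timestamp, feature) sorted by time.
    Returns a list of piles, each a list of photo indices."""
    if not photos:
        return []
    piles = [[0]]
    for i in range(1, len(photos)):
        t_prev, f_prev = photos[i - 1]
        t_cur, f_cur = photos[i]
        if t_cur - t_prev > max_gap or similarity(f_prev, f_cur) < min_sim:
            piles.append([i])       # either condition breaks the burst
        else:
            piles[-1].append(i)
    return piles
```

Requiring both temporal closeness and visual similarity means a scene change within one shooting session still starts a fresh pile, which matches the rationale given above.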
The effectiveness of such an approach has already been shown by Cooper et al. [13]. Additionally, in the Stream-order there is no minimal photo size, so the user can be sure that every pile contains only similar photos (without a minimal photo size, the photos can become very small in certain configurations). Clusters within the main cluster are singled out and shown above their parent cluster, with their own internal time line as well.

Additional user-created interface elements

As one of the first parts I dropped the earlier mentioned Lines, Textblocks and Boxes.
Still, the ideas as such were good and could probably be included in a future version (see 7).

History and Undo

A powerful and elegant feature, Undo, was just not manageable in the available time. Especially the "winding back"-metaphor plus the corresponding visualization with a focus on the currently active objects (inspired in parts by TimeMill [50]) would have been not only useful but also enjoyable to watch. With additional features like separation between users (which the current hardware does not support) as an extension to the separation between workspaces, a fine-tuned history management would have become possible.
This Undo-feature also has to wait for the next version (see 7).

Reducing the complexity of the similarity order

The last point is no complete drop of a feature but a simplification. The Similarity order as described above proved to be difficult to implement and would probably not have been too useful.