When did waving become part of human interaction?


When did waving at others to say hello or goodbye first enter the historical record as a culturally accepted form of human interaction?


I believe this happened before advanced human civilization. Waving lets you get someone's attention in the most efficient and most cautious way possible: it draws attention without being too conspicuous. It could have helped early peoples while hunting or during war.

Jonas Amédéo


History of Dance

From the earliest moments of known human history, dance has accompanied ancient rituals, spiritual gatherings, and social events. As a conduit for trance, spiritual force, pleasure, expression, performance, and interaction, dance has been woven into our nature from the first moments of our existence, from the time the earliest African tribes covered themselves in war paint to the spread of music and dance to all four corners of the world. Without question, dance remains one of the most expressive forms of communication we know.

The oldest evidence of dance comes from 9,000-year-old cave paintings found in India, which depict various scenes of hunting, childbirth, religious rites, burials, and, most importantly, communal drinking and dancing. Since dance itself cannot leave clearly identifiable archaeological artifacts that can be found today, scientists have looked for secondary clues: written words, stone carvings, paintings, and similar artifacts. The period when dance became widespread can be traced to the third millennium BC, when the Egyptians began using dance as an integral part of their religious ceremonies. Judging by the many tomb paintings that have survived the ravages of time, Egyptian priests used musical instruments and dancers to mime important events: stories of the gods and cosmic patterns of the moving stars and sun.

This tradition continued in ancient Greece, where dance was performed regularly and openly in public (which eventually led to the birth of the famous Greek theatre in the 6th century BC). Ancient paintings from the 1st millennium BC clearly attest to numerous dance rituals in Greek culture, notably the one preceding the start of each Olympic Games, the forerunner of the modern Olympics. Over the centuries, many other religions wove dance into the heart of their rituals, such as the Hindu dance "Bharata Natyam", which is still performed today.

Of course, not all dancing in those ancient times was for religious purposes. Ordinary people used dance for celebration, entertainment, seduction, and to induce a mood of frenzied euphoria. The annual celebration in honor of the Greek god of wine, Dionysus (and later the Roman god Bacchus), included several days of dancing and drinking. An Egyptian painting from around 1400 BC shows a group of scantily clad girls dancing for a wealthy male audience, accompanied by several musicians. This kind of entertainment continued to be refined through the medieval era and into the early Renaissance, when ballet became an integral part of upper-class life.

European dances before the start of the Renaissance were not widely documented, and only a few isolated fragments of their existence survive today. The basic "chain-shaped" dance practiced by commoners was the most widespread across Europe, but the arrival of the Renaissance and new forms of music brought many other styles into fashion. Renaissance dances from Spain, France, and Italy were soon overtaken by baroque dances, which became very popular in the French and English courts. After the end of the French Revolution, many new types of dance emerged, built around less restrictive women's clothing and a tendency toward hopping and leaping. These dances became even more energetic in 1844 with the start of the so-called "international polka craze", which also brought us the first appearance of the famous waltz.

After a short period in which the great ballroom masters created a wave of complicated dances, the era of modern partner dancing began with the careers of the famous ballroom dancers Vernon and Irene Castle. After those early years of the 20th century, many modern dances were invented (the Foxtrot, One-Step, Tango, Charleston, Swing, postmodern dance, hip-hop, breakdance, and more), and the expansion of musical theatre brought these dances worldwide popularity.


A Brief History of Human-Computer Interaction Technology

This article summarizes the historical development of major advances in human-computer interaction technology, emphasizing the pivotal role of university research in advancing the field.

Copyright (c) 1996 -- Carnegie Mellon University

A short excerpt from this article appeared as part of "Strategic Directions in Human Computer Interaction", edited by Brad Myers, Jim Hollan, and Isabel Cruz, ACM Computing Surveys, 28(4), December 1996.

This research was sponsored in part by NCCOSC under Contract No. N66001-94-C-6037, ARPA Order No. B326, and in part by the NSF under grant number IRI-9319969. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NCCOSC or the U.S. Government.

Keywords: human-computer interaction, history, user interfaces, interaction techniques.

Research in human-computer interaction (HCI) has been spectacularly successful and has fundamentally changed computing. Just one example is the ubiquitous graphical user interface used by Microsoft Windows 95, which is based on the Macintosh, which is based on work at Xerox PARC, which in turn is based on early research at the Stanford Research Laboratory (now SRI) and at the Massachusetts Institute of Technology. Another example is that virtually all software written today uses user interface toolkits and interface builders, concepts that were developed first in universities. Even the spectacular growth of the World Wide Web is a direct result of HCI research: applying hypertext technology to browsers allows one to traverse a link across the world with a click of the mouse. Interface improvements, more than anything else, have triggered this explosive growth. Furthermore, the research that will lead to the user interfaces for the computers of tomorrow is happening at universities and a few corporate research labs.

This article tries to briefly summarize many of the important research developments in human-computer interaction (HCI) technology. By "research", I mean exploratory work at universities and at government and corporate research labs (such as Xerox PARC) that is not directly related to products. By "HCI technology", I am referring to the computer side of HCI. A companion article on the history of the "human side", discussing the contributions of psychology, design, human factors, and ergonomics, would also be appropriate.

One motivation for this article is to overcome the mistaken impression that much of the important work in human-computer interaction occurred in industry, and that if university research in HCI is not supported, industry will carry on anyway. This is simply not true. This article tries to show that many of the most famous HCI successes developed by companies are deeply rooted in university research. In fact, virtually all of today's major interface styles and applications have been significantly influenced by research at universities and labs, often with government funding. To illustrate this, the article lists the funding sources of some of the major advances. Without this research, many of the advances in HCI would probably not have taken place, and as a consequence, the user interfaces of commercial products would be far more difficult to use and learn than they are today. As described by Stu Card:

"Government funding of advanced human-computer interaction technologies built the intellectual capital and trained the research teams for pioneering systems that, over a period of 25 years, revolutionized how people interact with computers. Industrial research laboratories at the corporate level in Xerox, IBM, AT&T, and others played a strong role in developing this technology and in bringing it into a form suitable for the commercial arena." [6, p. 162]

Figure 1 shows time lines for some of the technologies discussed in this article. Of course, a deeper analysis would reveal many interactions among the university, corporate research, and commercial activity streams. It is important to appreciate that years of research are involved in creating these technologies and making them ready for widespread use. The same will be true for the HCI technologies that will provide the interfaces of tomorrow.

It is clearly impossible to list every system and source in a paper of this scope, but I have tried to represent the earliest and most influential systems. Although there are a number of other surveys of HCI topics (see, for example, [1] [10] [33] [38]), none covers as many aspects as this one or tries to be as comprehensive in finding the original influences. Another useful resource is the video "All The Widgets", which shows the historical progression of a number of user interface ideas [25].

The technologies covered in this article include fundamental interaction styles such as direct manipulation, the mouse pointing device, and windows; several important application types, such as drawing, text editing, and spreadsheets; the technologies likely to have the greatest impact on future interfaces, such as gesture recognition, multimedia, and 3D; and the technologies used to create interfaces using other technologies, such as user interface management systems, toolkits, and interface builders.

Figure 1: Approximate time lines showing where work was performed on some major technologies discussed in this article.

  • Direct manipulation of graphical objects: The now-ubiquitous direct-manipulation interface, where visible objects on the screen are directly manipulated with a pointing device, was first demonstrated by Ivan Sutherland in Sketchpad [44], his 1963 MIT PhD thesis. SketchPad supported the manipulation of objects using a light pen, including grabbing objects, moving them, changing their size, and using constraints. It contained the seeds of a myriad of important interface ideas. The system was built at Lincoln Labs with support from the Air Force and the NSF. William Newman's Reaction Handler [30], created at Imperial College, London (1966-67), provided direct manipulation of graphics and introduced "Light Handles", a form of graphical potentiometer, which was probably the first "widget". Another early system was AMBIT/G (implemented at MIT's Lincoln Labs, 1968, ARPA funded). It employed, among other interface techniques, iconic representations, gesture recognition, dynamic menus with items selected using a pointing device, selection of icons by pointing, and moded and mode-free styles of interaction. David Canfield Smith coined the term "icons" in his 1975 Stanford PhD thesis on Pygmalion [41] (funded by ARPA and NIMH), and Smith later popularized icons as one of the chief designers of the Xerox Star [42]. Many of the interaction techniques popular in direct-manipulation interfaces, such as how objects and text are selected, opened, and manipulated, were researched at Xerox PARC in the 1970s. In particular, the idea of "WYSIWYG" (what you see is what you get) originated there with systems such as the Bravo text editor and the Draw drawing program [10]. The concept of direct-manipulation interfaces for everyone was envisioned by Alan Kay of Xerox PARC in a 1977 article about the "Dynabook" [16]. The first commercial systems to make extensive use of direct manipulation were the Xerox Star (1981) [42], the Apple Lisa (1982) [51], and the Macintosh (1984) [52]. Ben Shneiderman at the University of Maryland coined the term "Direct Manipulation" in 1982, identified its components, and gave it psychological foundations [40].
  • Drawing programs: Much of the current technology was demonstrated in Sutherland's 1963 Sketchpad system. The use of a mouse for graphics was demonstrated in NLS (1965). In 1968, Ken Pulfer and Grant Bechthold of the National Research Council of Canada built a mouse out of wood patterned after Engelbart's and used it with a key-frame animation system to draw all the frames of a movie. A subsequent movie, "Hunger" (1971), won a number of awards and was drawn using a tablet instead of the mouse (funding by the National Film Board of Canada) [3]. William Newman's Markup (1975) was the first drawing program for Xerox PARC's Alto, followed shortly by Patrick Baudelaire's Draw, which added handling of lines and curves [10, p. 326]. The first computer painting program was probably Dick Shoup's "Superpaint" at PARC (1974-75).
  • Gesture recognition: The first pen-based input device, the RAND tablet, was funded by ARPA. Sketchpad used light-pen gestures (1963). Teitelman developed the first trainable gesture recognizer in 1964. A very early demonstration of gesture recognition was Tom Ellis's GRAIL system on the RAND tablet (1964, ARPA funded). It was quite common in light-pen-based systems to include some gesture recognition, for example in the AMBIT/G system (1968, ARPA funded). A gesture-based text editor using proofreading symbols was developed at CMU by Michael Coleman in 1969. Bill Buxton at the University of Toronto has been studying gesture-based interactions since 1980. Gesture recognition has been used in commercial CAD systems since the 1970s, and came to universal notice with the Apple Newton in 1992.

The area of user interface software tools is quite active now, and many companies are selling tools. Most of today's applications are implemented using various forms of software tools. For a more complete survey and discussion of UI tools, see [26].

    UIMSs and toolkits: (There are software libraries and tools that support creating interfaces by writing code.) Much of the early work was done at universities (the University of Toronto with Canadian government funding; George Washington University with NASA, NSF, DOE, and NBS funding; Brigham Young University with industrial funding; and others). The term "UIMS" was coined by David Kasik at Boeing (1982) [14]. Early window managers such as Smalltalk (1974) and InterLisp, both from Xerox PARC, came with a few widgets, such as pop-up menus and scroll bars. The Xerox Star (1981) was the first commercial system to have a large collection of widgets. The Apple Macintosh (1984) was the first to actively promote its toolkit for use by other developers to enforce a consistent interface. An early C++ toolkit was InterViews [21], developed at Stanford (1988, industrial funding). Much of the modern research is being performed at universities, for example the Garnet (1988) [28] and Amulet (1994) [27] projects at CMU (ARPA funded), and subArctic at Georgia Tech (1996, funded by Intel and NSF). A minimal sketch of what building with toolkit-supplied widgets looks like in code follows below.
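To make the notion of "a toolkit of widgets" concrete, here is a minimal sketch using Python's standard Tkinter library. Tkinter is a modern stand-in chosen purely for illustration; it is not one of the historical toolkits discussed above, and the widget names are Tkinter's own, not those of Smalltalk, the Star, or the Macintosh toolbox.

```python
# A minimal sketch: the application composes ready-made widgets (a menu,
# a scroll bar, a text area) and the toolkit handles drawing and input.
import tkinter as tk

root = tk.Tk()
root.title("Toolkit demo")

# A menu bar with a pull-down menu, the kind of widget early toolkits popularized.
menubar = tk.Menu(root)
filemenu = tk.Menu(menubar, tearoff=0)
filemenu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=filemenu)
root.config(menu=menubar)

# A scrollable text area: the scroll bar and the text widget are wired together.
text = tk.Text(root, height=10, width=40)
scroll = tk.Scrollbar(root, command=text.yview)
text.config(yscrollcommand=scroll.set)
text.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
scroll.pack(side=tk.RIGHT, fill=tk.Y)

root.mainloop()
```

The point of the example is the division of labor that UIMSs and toolkits introduced: application code declares and configures widgets, while the toolkit owns their appearance and event handling, which is what makes a consistent interface across applications possible.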

It is clear that all of the most important innovations in human-computer interaction have benefited from research at both corporate research labs and universities, much of it funded by the government. The conventional style of graphical user interfaces that use windows, icons, menus, and a mouse is in a standardization phase, where almost everyone is using the same standard technology and making only tiny incremental changes. It is therefore important that university-, corporate-, and government-funded research continue, so that we can develop the science and technology needed for the user interfaces of the future.

Another important argument for HCI research at universities is that computer science students need to know about user interface issues. User interfaces are likely to be one of the main value-added competitive advantages of the future, as both hardware and basic software become commodities. If students do not know about user interfaces, they will not serve industry's needs. It seems that it is only through computer science that HCI research makes its way into products. Furthermore, without appropriate levels of funding for academic HCI research, there will be fewer PhD graduates in HCI to perform research in corporate labs, and fewer top graduates in the area will be interested in becoming professors, so the needed user interface courses will not be offered.

As computers get faster, more of the processing power is being devoted to the user interface. The interfaces of the future will use gesture recognition, speech recognition and generation, "intelligent agents", adaptive interfaces, video, and many other technologies now being investigated by research groups at universities and corporate labs [35]. It is imperative that this research continue and be well supported.

I must thank a large number of people who responded to posts of earlier versions of this article on the announcements.chi mailing list for their very generous help, and Jim Hollan, who helped edit the short excerpt of this article. Much of the information in this article was supplied by (in alphabetical order): Stacey Ashlund, Meera M. Blattner, Keith Butler, Stuart K. Card, Bill Curtis, David E. Damouth, Dan Diaper, Dick Duda, Tim T.K. Dudley, Steven Feiner, Harry Forsdick, Bjorn Freeman-Benson, John Gould, Wayne Gray, Mark Green, Fred Hansen, Bill Hefley, D. Austin Henderson, Jim Hollan, Jean-Marie Hullot, Rob Jacob, Bonnie John, Sandy Kobayashi, T. K. Landauer, John Leggett, Roger Lighty, Marilyn Mantei, Jim Miller, William Newman, Jakob Nielsen, Don Norman, Dan Olsen, Ramesh Patil, Gary Perlman, Dick Pew, Ken Pier, Jim Rhyne, Ben Shneiderman, John Sibert, David C. Smith, Elliot Soloway, Richard Stallman, Ivan Sutherland, Dan Swinehart, John Thomas, Alex Waibel, Marceli Wein, Mark Weiser, Alan Wexelblat, and Terry Winograd. Editorial comments were also provided by the people above as well as Ellen Borison, Rich McDaniel, Rob Miller, Bernita Myers, Yoshihiro Tsujino, and the reviewers.

1. Baecker, R., et al., "A Historical and Intellectual Perspective," in Readings in Human-Computer Interaction: Toward the Year 2000, Second Edition, R. Baecker, et al., editors. 1995, Morgan Kaufmann Publishers, Inc.: San Francisco. pp. 35-47.

2. Brooks, F. "The Computer 'Scientist' as Toolsmith--Studies in Interactive Computer Graphics," in IFIP Conference Proceedings. 1977. pp. 625-634.

3. Burtnyk, N. and Wein, M., "Computer Generated Key Frame Animation." Journal of the Society of Motion Picture and Television Engineers, 1971. 8(3): pp. 149-153.

4. Bush, V., "As We May Think." The Atlantic Monthly, 1945. 176(July): pp. 101-108. Reprinted and discussed in interactions, 3(2), March 1996, pp. 35-67.

5. Buxton, W., et al. "Towards a Comprehensive User Interface Management System," in Proceedings SIGGRAPH'83: Computer Graphics. 1983. Detroit, Mich. 17. pp. 35-42.

6. Card, S.K., "Pioneers and Settlers: Methods Used in Successful User Interface Design," in Human-Computer Interface Design: Success Stories, Emerging Methods, and Real-World Context, M. Rudisill, et al., editors. 1996, Morgan Kaufmann Publishers: San Francisco. pp. 122-169.

7. Coons, S. "An Outline of the Requirements for a Computer-Aided Design System," in AFIPS Spring Joint Computer Conference. 1963. 23. pp. 299-304.

8. Engelbart, D. and English, W., "A Research Center for Augmenting Human Intellect." 1968. Reprinted in ACM SIGGRAPH Video Review, 1994. 106.

9. English, W.K., Engelbart, D.C., and Berman, M.L., "Display Selection Techniques for Text Manipulation." IEEE Transactions on Human Factors in Electronics, 1967. HFE-8(1).

10. Goldberg, A., ed. A History of Personal Workstations. 1988, Addison-Wesley Publishing Company: New York, NY. 537.

11. Goldberg, A. and Robson, D. "A Metaphor for User Interface Design," in Proceedings of the 12th Hawaii International Conference on System Sciences. 1979. 1. pp. 148-157.

12. Henderson Jr, D.A. "The Trillium User Interface Design Environment," in Proceedings SIGCHI'86: Human Factors in Computing Systems. 1986. Boston, MA. pp. 221-227.

13. Johnson, T. "Sketchpad III: Three Dimensional Graphical Communication with a Digital Computer," in AFIPS Spring Joint Computer Conference. 1963. 23. pp. 347-353.

14. Kasik, D.J. "A User Interface Management System," in Proceedings SIGGRAPH'82: Computer Graphics. 1982. Boston, MA. 16. pp. 99-106.

15. Kay, A., The Reactive Engine. PhD Thesis, Electrical Engineering and Computer Science, University of Utah, 1969.

16. Kay, A., "Personal Dynamic Media." IEEE Computer, 1977. 10(3): pp. 31-42.

17. Koved, L. and Shneiderman, B., "Embedded menus: Selecting items in context." Communications of the ACM, 1986. 4(29): pp. 312-318.

18. Levinthal, C., "Molecular Model-Building by Computer." Scientific American, 1966. 214(6): pp. 42-52.

19. Levy, S., Hackers: Heroes of the Computer Revolution. 1984, Garden City, NY: Anchor Press/Doubleday.

20. Licklider, J.C.R. and Taylor, R.W., "The Computer as a Communication Device." Science and Technology, 1968. April: pp. 21-31.

21. Linton, M.A., Vlissides, J.M., and Calder, P.R., "Composing user interfaces with InterViews." IEEE Computer, 1989. 22(2): pp. 8-22.

22. Meyrowitz, N. and Van Dam, A., "Interactive Editing Systems: Part 1 and 2." ACM Computing Surveys, 1982. 14(3): pp. 321-352.

23. Myers, B.A., "The User Interface for Sapphire." IEEE Computer Graphics and Applications, 1984. 4(12): pp. 13-23.

24. Myers, B.A., "A Taxonomy of User Interfaces for Window Managers." IEEE Computer Graphics and Applications, 1988. 8(5): pp. 65-84.

25. Myers, B.A., "All The Widgets." SIGGRAPH Video Review, 1990. 57.

26. Myers, B.A., "User Interface Software Tools." ACM Transactions on Computer Human Interaction, 1995. 2(1): pp. 64-103.

27. Myers, B.A., et al., The Amulet V2.0 Reference Manual. Carnegie Mellon University Computer Science Department Report, February 1996. System available from http://www.cs.cmu.edu/

28. Myers, B.A., et al., "Garnet: Comprehensive Support for Graphical, Highly Interactive User Interfaces." IEEE Computer, 1990. 23(11): pp. 71-85.

29. Nelson, T. "A File Structure for the Complex, the Changing, and the Indeterminate," in Proceedings of the ACM National Conference. 1965. pp. 84-100.

30. Newman, W.M. "A System for Interactive Graphical Programming," in AFIPS Spring Joint Computer Conference. 1968. 28. pp. 47-54.

31. Nielsen, J., Multimedia and Hypertext: The Internet and Beyond. 1995, Boston: Academic Press Professional.

32. Palay, A.J., et al. "The Andrew Toolkit - An Overview," in Proceedings of the Winter Usenix Technical Conference. 1988. Dallas, Tex. pp. 9-21.

33. Press, L., "Before the Altair: The History of Personal Computing." Communications of the ACM, 1993. 36(9): pp. 27-33.

34. Reddy, D.R., "Speech Recognition by Machine: A Review," in Readings in Speech Recognition, A. Waibel and K.-F. Lee, editors. 1990, Morgan Kaufmann: San Mateo, CA. pp. 8-38.

35. Reddy, R., "To Dream the Possible Dream (Turing Award Lecture)." Communications of the ACM, 1996. 39(5): pp. 105-112.

36. Robertson, G., Newell, A., and Ramakrishna, K., ZOG: A Man-Machine Communication Philosophy. Carnegie Mellon University Technical Report, August 1977.

37. Ross, D. and Rodriguez, J. "Theoretical Foundations for the Computer-Aided Design System," in AFIPS Spring Joint Computer Conference. 1963. 23. pp. 305-322.

38. Rudisill, M., et al., Human-Computer Interface Design: Success Stories, Emerging Methods, and Real-World Context. 1996, San Francisco: Morgan Kaufmann Publishers.

39. Scheifler, R.W. and Gettys, J., "The X Window System." ACM Transactions on Graphics, 1986. 5(2): pp. 79-109.

40. Shneiderman, B., "Direct Manipulation: A Step Beyond Programming Languages." IEEE Computer, 1983. 16(8): pp. 57-69.

41. Smith, D.C., Pygmalion: A Computer Program to Model and Stimulate Creative Thought. 1977, Basel, Stuttgart: Birkhauser Verlag. PhD Thesis, Stanford University Computer Science Department, 1975.

42. Smith, D.C., et al. "The Star User Interface: An Overview," in Proceedings of the 1982 National Computer Conference. 1982. AFIPS. pp. 515-528.

43. Stallman, R.M., Emacs: The Extensible, Customizable, Self-Documenting Display Editor. MIT Artificial Intelligence Lab Report, August 1979.

44. Sutherland, I.E. "SketchPad: A Man-Machine Graphical Communication System," in AFIPS Spring Joint Computer Conference. 1963. 23. pp. 329-346.

45. Swinehart, D., et al., "A Structural View of the Cedar Programming Environment." ACM Transactions on Programming Languages and Systems, 1986. 8(4): pp. 419-490.

46. Swinehart, D.C., Copilot: A Multiple Process Approach to Interactive Programming Systems. PhD Thesis, Stanford University Computer Science Department, 1974. SAIL Memo AIM-230 and CSD Report STAN-CS-74-412.

47. Teitelman, W., "A Display Oriented Programmer's Assistant." International Journal of Man-Machine Studies, 1979. 11: pp. 157-187. Also Xerox PARC Technical Report CSL-77-3, Palo Alto, CA, March 8, 1977.

48. Tolliver, B., TVEdit. Stanford Time Sharing Memo Report, March 1965.

49. van Dam, A., et al. "A Hypertext Editing System for the 360," in Proceedings of the Conference on Computer Graphics. 1969. University of Illinois.

50. van Dam, A. and Rice, D.E., "On-line Text Editing: A Survey." Computing Surveys, 1971. 3(3): pp. 93-114.

51. Williams, G., "The Lisa Computer System." Byte Magazine, 1983. 8(2): pp. 33-50.

52. Williams, G., "The Apple Macintosh Computer." Byte, 1984. 9(2): pp. 30-54.


Information Revolution

In 1982, the Association for Computing Machinery (ACM) recognized the growing need to consider users in software design by creating the Special Interest Group on Computer-Human Interaction (SIGCHI). Shortly thereafter, the field of human-computer interaction (HCI) became a recognized subdiscipline of computer science.

Because designing how people use digital systems was so new, and because the task required integrating many areas of knowledge, it became a dynamic area of research across multiple fields of study (psychology, cognitive science, architecture, library science, and more). At first, however, creating software still required the skills of an engineer. That changed in 1993 with the launch of the Mosaic web browser, which brought Tim Berners-Lee's vision for the World Wide Web to life. The Internet had existed for years, but the graphical nature of the Web made it far more accessible.

The Web was an entirely new medium, designed from the ground up around networks and virtuality. It presented a blank slate of possibility, open to new forms of interaction, new interface metaphors, and new possibilities for interactive visual expression. Most importantly, it was accessible to anyone who wanted to build their own corner of the Web, using nothing more than the simple HyperText Markup Language (HTML).

From the start, web browsers always shipped with a "View Source" feature that let anyone see how a page was built. That openness, combined with HTML's low learning curve, meant that a flood of new people with no background in computer science or design began shaping how we interact with the Web.

The Web accelerated the information revolution and advanced the idea that "information wants to be free": free to share, free to copy, and free of anything physical. Microsoft Windows had already begun to decouple software from the machines it ran on, but the Web pushed interactive environments into an entirely virtual realm. A website can be accessed from any computer, regardless of its size, type, or brand.

By the mid-1990s, Wired had taken to describing Internet users as netizens, socializing in virtual reality was an aspiration, and there was growing enthusiasm that e-commerce could replace physical stores. The late-twentieth-century narrative of progress was bound up with this triumph of the virtual over the physical. The future of communication, culture, and the economy seemed increasingly to play out at a keyboard, in the world on the other side of the screen.

Standing on the shoulders of earlier pioneers, this flood of Web-native designers used the very medium they were creating to define new interaction patterns and best practices. The Web ushered in the consumer phase of computing, expanding the reach and influence of interaction design to a level approaching that of its older industrial cousin.


Introductory Works

This section presents a sample of the early works that guided research on fostering interpersonal relationships and interactions through technology. Kiesler, et al. 1984 goes beyond the efficiency and technical capabilities of computer-mediated communication technologies and provides insight into the psychological, social, and cultural meaning of technology. Jones 1994 provides a comprehensive review of various aspects of social relationships in cyberspace. Early studies that offer best-practice recommendations for adopting technology-based interventions in social work practice include Pardeck and Schulte 1990, Cwikel and Cnaan 1991, Schopler, et al. 1998, and Gonchar and Adams 2000. Lea and Spears 1995, Kraut, et al. 1998, and Nie and Erbring 2000 offer early insight into how the Internet began to shape the way humans interact.

Cwikel, Julie, and Ram Cnaan. 1991. Ethical dilemmas in applying second-wave information technology to social work practice. Social Work 36.2: 114–120.

These authors examine the ethical dilemmas brought about by the use of information technologies in social work practice. They consider the effects on the client-worker relationship of using client databases, expert systems, therapeutic programs, and telecommunications.

Gonchar, Nancy, and Joan R. Adams. 2000. Living in cyberspace: Recognizing the importance of the virtual world in social work assessments. Journal of Social Work Education 36:587–600.

Using the person-in-environment approach, this source explores the opportunities that online communication offers individuals to foster relationships, healthy or unhealthy.

Jones, Steve, ed. 1994. CyberSociety: Computer-mediated communication and community. Thousand Oaks, CA: SAGE.

Explores the construction, maintenance, and mediation of emerging cybersocieties. Aspects of the social relationships generated by computer-mediated communication are discussed.

Kiesler, Sara, Jane Siegel, and Timothy W. McGuire. 1984. Social psychological aspects of computer-mediated communication. American Psychologist 39.10: 1123–1134.

The authors present the potential behavioral and social effects of computer-mediated communication.

Kraut, Robert, Michael Patterson, Vickie Lundmark, Sara Kiesler, Tridas Mukopadhyay, and William Scherlis. 1998. Internet paradox: A social technology that reduces social involvement and psychological well-being? American Psychologist 53.9: 1017–1031.

This study examines the positive and negative impacts of the Internet on social relationships, participation in community life, and psychological well-being. The implications for research, policy, and technology development are discussed.

Lea, Martin, and Russell Spears. 1995. Love at first byte? Building personal relationships over computer networks. In Understudied relationships: Off the beaten track. Edited by J. T. Wood and S. Duck, 197–233. Thousand Oaks, CA: SAGE.

This chapter focuses on the connection between personal relationships and computer networks. Previous studies that examine dynamics of online relationships are reviewed.

Nie, Norman H., and Lutz Erbring. 2000. Internet and society: A preliminary report. Stanford, CA: Stanford Institute for the Quantitative Study of Society.

This study presents the results of an early study that explores the sociological impact of information technology and the role of the Internet in shaping interpersonal relationships and interactions.

Pardeck, John T., and Ruth S. Schulte. 1990. Computers in social intervention: Implications for professional social work practice and education. Family Therapy 17.2: 109.

The authors discuss the impact of computer technology on aspects of social work intervention including inventory testing, client history, clinical assessment, computer-assisted therapy, and computerized therapy.

Schopler, Janice H., Melissa D. Abell, and Maeda J. Galinsky. 1998. Technology-based groups: A review and conceptual framework for practice. Social Work 43.3: 254–267.

The authors examine studies of social work practice using telephone and computer groups. Social work practice guidelines for technology-based groups are discussed.

Turkle, Sherry. 1984. The second self: Computers and the human spirit. New York: Simon & Schuster.

Explores the use of computers not as tools but as part of our social and psychological lives and how computers affect our awareness of ourselves, of one another, and of our relationship with the world.

Weizenbaum, Joseph. 1976. Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.

Examines the sources of the computer’s power including the notions of the brilliance of computers and offers evaluative explorations of computer power and human reason. The book presents common theoretical issues and applications of computer power such as computer models of psychology, natural language, and artificial intelligence.



HCI surfaced in the 1980s with the advent of personal computing, just as machines such as the Apple Macintosh, IBM PC 5150 and Commodore 64 started turning up in homes and offices in society-changing numbers. For the first time, sophisticated electronic systems were available to general consumers for uses such as word processors, games units and accounting aids. Consequently, as computers were no longer room-sized, expensive tools exclusively built for experts in specialized environments, the need to create human-computer interaction that was also easy and efficient for less experienced users became increasingly vital. From its origins, HCI would expand to incorporate multiple disciplines, such as computer science, cognitive science and human-factors engineering.

HCI soon became the subject of intense academic investigation. Those who studied and worked in HCI saw it as a crucial instrument to popularize the idea that the interaction between a computer and the user should resemble a human-to-human, open-ended dialogue. Initially, HCI researchers focused on improving the usability of desktop computers (i.e., practitioners concentrated on how easy computers are to learn and use). However, with the rise of technologies such as the Internet and the smartphone, computer use would increasingly move away from the desktop to embrace the mobile world. Also, HCI has steadily encompassed more fields:

“…it no longer makes sense to regard HCI as a specialty of computer science; HCI has grown to be broader, larger and much more diverse than computer science itself. HCI expanded from its initial focus on individual and generic user behavior to include social and organizational computing, accessibility for the elderly, the cognitively and physically impaired, and for all people, and for the widest possible spectrum of human experiences and activities. It expanded from desktop office applications to include games, learning and education, commerce, health and medical applications, emergency planning and response, and systems to support collaboration and community. It expanded from early graphical user interfaces to include myriad interaction techniques and devices, multi-modal interactions, tool support for model-based user interface specification, and a host of emerging ubiquitous, handheld and context-aware interactions.”

— John M. Carroll, author and a founder of the field of human-computer interaction.


The Science Of Human Connection And Wellness In A Digitally Connected World

The most precious commodities on this planet are our health, love, and happiness. Regardless of what we accomplish and accumulate in life, we are unable to take it with us.

In the fast paced, consumer driven, social media shared world that we live in today, success and happiness are often defined by the status of what we achieve, and the value of the things that we own.

Everywhere we look, we are inundated with the same message: the measure of our self-worth is directly equal to the measure of our material wealth.

Whether it’s the status car, the trendiest clothes, the luxury home or the CEO title that comes with the envied corner office with a view, these and the many other status symbols of wealth and success seem to forever define our value in our culture today, immortalized by the cinematic perfection of super heroes and super stars, and broadcasted through the perfectly curated lives that bombard us daily by “friends” on social media.

Fueled by equal parts aspiration and expectation, in an entirely odd and unusual way, envy has become the 21st Century’s most enduring economic driver, feeding our most persistent social cravings and endless material consumerism.

In our effort to keep up with all that is expected of us — and expected of ourselves — many of us find ourselves in perpetual motion, filling our days with the hyper-active, turbo-charged, “crazy busy” schedules that keep us struggling to eat healthy, find and maintain balance between our work, busy careers, and all that’s happening in our personal lives. And despite our success, when we achieve it, it seems that quality personal time for ourselves and for nurturing our relationships has become increasingly more elusive.

Psychologists see a pattern in this success driven culture of busyness and the associated “connection disconnection” of an increasingly digitally remote world, and it’s triggering what they say is rapidly becoming a dire epidemic of loneliness. In the elderly, this epidemic of loneliness is known as the “hidden killer.”

With our daily use of email, texting, smart phones, professional and social media, we live in an age of instant global connectivity. We are more connected to one another today than ever before in human history, yet somehow, we’re actually increasingly feeling more alone.

No longer considered a marginalized issue suffered only by the elderly, outcasts, or those on the social fringe, the current wave of loneliness sweeping the nation is hitting much closer to home than you might think. And as shocking as it may seem, new research shows that loneliness may be the next big public health crisis to face Americans since the rise of obesity and substance abuse.

In fact, loneliness and its associated depression have become downright rampant, even among some of the most successful: studies show that business executives and CEOs may suffer at more than double the rate of the general public, whose rate is already an astonishing twenty percent.

What’s more, this ever-growing loneliness among the hyper successful is not just a result of the social and professional isolation of living in a more global and digitized world, but rather it’s a “lonely at the top” malaise that’s spreading largely due to the sheer emotional exhaustion of business and workplace burnout.

Science is now sounding the alarm that there’s a significant correlation between feeling lonely and work exhaustion — and the more exhausted people are, the lonelier they feel. This, of course, is made worse by the ever-growing trend for a large segment of professionals who now work mobile and remotely.

Throughout history, human beings have inherently been social creatures. For millions of years we’ve genetically evolved to survive and thrive through the “togetherness” of social groups and gatherings. Today, modern communication and technology has forever changed the landscape of our human interaction, and as such, we often decline without this type of meaningful personal contact. Today’s highly individualistic, digitally remote, and material driven culture is now challenging all of this, as we turn to science to unlock the mysteries of human connection and wellness in a digitally connected world.

Connection of Disconnection

In a world where some of our most personal moments are “Shared” online with “Friends”, business meetings are replaced with digital “Hangouts”, and the most important breaking news is “Tweeted” in a mere 140 characters or less, we often seem more captivated by the flashing notifications on our mobile phones than by anything we are actually experiencing beyond our tiny 5.7-inch screens.

Mobile technologies ushered in by Internet icons like Google, who have literally defined what it means to have “the world’s information at your fingertips”, have no doubt brought us one step closer to truly living in a “Global Village”. However, no matter how small the world may seem to be getting, it now also feels like it’s often becoming a much less personable place to live in as well.

This also means understanding just how much the “connection disconnection” of loneliness negatively impacts our health, and beginning to attend to the signs and symptoms of loneliness with preventative measures, the very same way we would with diet, exercise, and adequate sleep.

Dr. John Cacioppo, PhD, is a Professor of Neuroscience and director of the Center for Cognitive and Social Neuroscience at the University of Chicago, and a leading researcher on the effects of loneliness on human health. According to Dr. Cacioppo, the physical effects of loneliness and social isolation are as real as any other physical detriment to the body — such as thirst, hunger, or pain. “For a social species, to be on the edge of the social perimeter is to be in a dangerous position,” says Dr. Cacioppo, who is the co-author of the best-selling book “Loneliness: Human Nature and the Need for Social Connection”, hailed by critics as one of the most important books about the human condition to appear in a decade.

Loneliness changes our thoughts, which changes the chemistry of our brains, says Dr. Cacioppo. “The brain goes into a self-preservation state that brings with it a lot of unwanted side effects.” This includes increased levels of cortisol, the stress hormone that can predict heart death due to its highly negative effects on the body. This increase in cortisol triggers a host of negative physical effects — including a persistent disruption in our natural patterns of sleep, according to Dr. Cacioppo. “As a result of increased cortisol, sleep is more likely to be interrupted by micro-awakenings,” reducing our ability to get enough quality sleep, that in time begins to erode our overall greater health and well-being.

One of the most important discoveries of Dr. Cacioppo’s research is the epigenetic impact that loneliness has on our genes. His recent studies reveal how the emotional and physical effects of loneliness trigger cellular changes that alter gene expression in our bodies, or “what genes are turned on and off in ways that help prepare the body for assaults, but that also increases the stress and aging on the body as well.” This epigenetic effect provides important clues for improving our understanding of the physical effects of loneliness; in an increasingly remote and digitally connected world, minding our digital footprint and ensuring that we cultivate real and meaningful relationships with others may hold the key to staying healthy and keeping the onset of loneliness at bay.

Social Media’s Alone Together

Worldwide, there are over 2.01 billion active monthly users of social media, and of the 300 million of us in the United States, sometimes it feels like we’ve all just become new “Friends” on Facebook.

With so many of us being “Friends” and so well connected, you’d think that our social calendars would be totally full.

But the sad truth is that for all of the social media friends that we may have out in cyberspace, studies show that social media usage is actually making us less socially active in the real world, and Americans in particular are finding themselves lonelier than ever.

According to a recent study by sociologists at Duke University and the University of Arizona, published in the American Sociological Review, Americans’ circle of close friends and confidants has shrunk dramatically over the past two decades, and the number who say they have no one outside of their immediate family to discuss important matters with has more than doubled, reaching a shocking 53.4% — up 17% since the dawn of the Internet and social media.

What’s more, nearly a quarter of those surveyed say they have no close friends or confidantes at all — a 14 percent increase since we all became so digitally connected.

Looking at the stats, we should ask ourselves, are digital communication technologies and social media platforms like Facebook and Twitter helping us or actually hurting us?

Many experts seem to feel the latter, and see a clear pattern with social media use and the decline in social intimacy, contributing greatly to today’s social and personal breakdown.

In her recent book “Alone Together: Why We Expect More from Technology and Less from Each Other”, MIT Professor Dr. Sherry Turkle, PhD, argues the case that this just may be so.

Dr. Turkle puts forth a host of pretty convincing signs that technology is threatening to dominate our lives and make us less and less social as humans. In Alone Together, she warns us that in only just a few short years, technology has become the architect of our intimacies. “Online, we fall prey to the illusion of companionship, gathering thousands of Twitter and Facebook friends, and confusing tweets and wall posts with authentic communication.” But this relentless online digital connection is not real social intimacy, and it leads us to a deep feeling of solitude.

Compounding matters is the added burden of increasingly busy schedules. People are now working very long hours — far more than in any recent history — and many feel that the only way they can make social contact is online, via social media or even online dating apps, which they often feel is faster and cheaper than actually going out for an intimate connection in person. Many even prefer the limited effort needed to maintain digital friendships, versus live interpersonal relationships, which allows them to feel connected while actually remaining somewhat disconnected.

This is perhaps ever more apparent with a new generation of Americans who have grown up with smartphones and social media, and as a result, may have even lost some fundamental social skills due to excessive online and social media use.

Dr. Brian Primack, PhD is the director of the Center for Research on Media, Technology and Health at the University of Pittsburgh, and co-author of a study published by the American Journal of Preventive Medicine, which shows that those who spend the most time digitally connecting on social media — more than two hours a day — had more than twice the odds of feeling socially isolated and lonely, compared to those who spend only a half hour per day. While real life face-to-face social connectedness seems to be strongly associated with feelings of well-being, the study shows that this naturally expected outcome seems to change when our interactions happen virtually. These results seemed very much to be counterintuitive — yet somehow this negative outcome is entirely consistent and true.
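To unpack what "more than twice the odds" means in a finding like this, here is a small illustrative calculation. The counts below are invented for the example and are not the study's data; only the arithmetic of an odds ratio is being shown.

```python
# Hypothetical counts, NOT the study's data: how an odds ratio compares two groups.
heavy_users = {"lonely": 40, "not_lonely": 60}   # e.g., >2 hours/day on social media
light_users = {"lonely": 15, "not_lonely": 85}   # e.g., about half an hour/day

odds_heavy = heavy_users["lonely"] / heavy_users["not_lonely"]   # 40/60 = 0.67
odds_light = light_users["lonely"] / light_users["not_lonely"]   # 15/85 = 0.18
odds_ratio = odds_heavy / odds_light                             # roughly 3.8

print(f"odds (heavy users) = {odds_heavy:.2f}")
print(f"odds (light users) = {odds_light:.2f}")
print(f"odds ratio         = {odds_ratio:.1f}")
```

An odds ratio above 2, as reported in the study, simply means the odds of feeling isolated in the heavy-use group are more than twice the odds in the light-use group; it does not by itself establish which way the causation runs.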

Dr. Primack’s earlier research on the connection of social media use and depression in young adults seemed to confirm what many already suspected, that our self-esteem can easily take a nosedive each time we log in to a social media network. There is a natural tendency to compare our lives to those we see online, and when we see others seemingly living the life of our dreams, it’s human nature not to feel just a little bit envious. However, if left unchecked, that envy can quickly turn into low self-esteem — and that can quickly spiral into depression. And like a vicious cycle, the more depressed and the lower our self-esteem, the lonelier we feel.

Meanwhile, a recent study found that those who gave up Facebook for even just a week felt much happier, less lonely, and less depressed at the end of the study than other participants who continued using it.

The message is clear, that it’s important to use social media in positive ways. It’s a strong reminder of the importance of establishing real and meaningful interpersonal friendships, versus isolating ourselves in the digital social world. Real life interactions help us to build lasting relationships that fulfill our innate human need to form bonds and feel connected.

The solution, experts say, is that we have to begin to recognize the inherent pitfalls of social media and begin to utilize our online time in more positive ways that enhance our relationships — not detract from them. Social media can actually be a positive step toward building a “Global Village”, if we make it so.

It all depends on how we choose to interact online. It’s important to remember this, in our ever-busy quest for success in our increasingly digitally connected lives.

Connect With Your Friends The Old Fashioned Way — Device Free.

I have established really strong boundaries to keep outings device free on date nights, with my friends, and in business meetings. Let me clarify: a device can be present; however, it must be switched off completely and preferably kept out of sight.

I have one friend who I visit sometimes. She is unable or unwilling to hear the boundaries I would like to set for our device-free get-togethers. She is really smart and quite amazing; we will talk for about ten minutes, deep in a delightful, meaningful conversation, and then, like a merciless predator going in for the kill, she pounces on her phone and starts in on her social media. She is an addict. I overtly exit “stage left.” She is disappointed that I leave, but this is the only way I can train her to have a device-free get-together. Our conversations have actually gotten longer since I started doing this. When we go out for dinner she has to leave her phone at home, otherwise she cannot help reaching for it. The question I ask her is, “Dinner with Marina, or will it be dinner with your phone?” She does opt for dinner with Marina.

Self Love is one of the most important loves of all. When we learn to love ourselves completely, then we can truly love others.

Connect with Your Friends & Loved Ones & Disconnect from Loneliness

01. Choose Self Love & Practice Self Love With Regards To How You Want It To Show Up In Your Life.

02. Choose To Be Worthy & Deserving Of Being Loved By Others On Your Own Terms.

03. Choose To Love People Unconditionally With Strong Boundaries.

04. Choose To Love People Unconditionally Without Being Taken Advantage Of.

05. Choose To Celebrate Who You Are.

06. Choose To See Your Value & How Valuable You Are To Yourself & Others.

07. Choose To Have Self Worth & Self Esteem & Positive Self Deserving In All Areas of Your Life.

08. Choose To Be Empathic With Your Friends With Strong Boundaries.

09. Choose To Be A Great Listener.

10. Choose To Be Worthy & Deserving To Be Listened To & Be Heard.

11. Choose To Be A Good Friend Without Being Taken Advantage Of.

12. Choose To Be Respectful, Present and Mindful With Your Friends.

13. Choose To Speak Your Truth With Emotional Intelligence.

14. Choose To Have Confidence In All Areas of Your Life.

15. Choose To Authentically Live Your Own Personal Truth In All Areas of Your Life.


How Humans Became Social


By Elizabeth Pennisi, ScienceNOW

Look around and it's impossible to miss the importance of social interactions to human society. They form the basis of our families, our governments, and even our global economy. But how did we become social in the first place? Researchers have long believed that it was a gradual process, evolving from couples to clans to larger communities. A new analysis, however, indicates that primate societies expanded in a burst, most likely because there was safety in numbers.

It's a controversial idea, admits anthropologist and study author Susanne Shultz of the University of Oxford in the United Kingdom. "We're likely going to cause a bit of trouble."

Over the past several decades, researchers have gained tremendous insights into the evolution of social groups in bees and birds by comparing them with relatives with different social systems. In these animals, it seems that complex societies evolved in steps. Single individuals paired off or began living with a few offspring. These small groups gradually grew larger and more complicated, ultimately yielding complex organizations. Some anthropologists have assumed a similar history for primates.

Shultz and colleagues decided to test this idea. Their first task was figuring out which factors influenced the makeup of current primate societies. A common hypothesis is that the local environment shapes group structure. For example, food scarcity might drive individuals together so that they can help each other with hunting and foraging. But after combing the scientific literature on 217 primate species, the researchers noticed that closely related species tended to organize their societies in the same way, no matter where they lived. Baboons and macaques, for example, inhabit many places and habitats, yet for the most part they always live in a mixed company of related females and unrelated males.

Because group structure was not at the whims of the environment, Shultz and colleagues reasoned, it must be passed down through evolutionary time. And indeed, when they looked across the primate family tree, they found that the current social behaviors of a species tended to be similar to those of its ancestors.

With this in mind, the researchers inferred how the ancestors of these primates lived, trying to come up with the scenario that would require the fewest evolutionary changes to get to the current distribution of social organizations in the family tree. They ran a statistical model to determine what would happen, say, if the last common ancestor to the monkeys and apes lived in pairs or lived in groups.
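
The “fewest evolutionary changes” logic can be sketched in code. The toy below only illustrates that parsimony idea (Fitch’s small-parsimony algorithm) on a made-up five-species tree; the actual study fitted statistical models across 217 species, and the tree, species and states here are hypothetical placeholders.

```python
# Toy illustration of parsimony-style ancestral-state reconstruction
# (Fitch's algorithm): find ancestral state sets that minimise the
# number of changes needed to explain the states observed at the tips.

def fitch(node, states):
    """Return (candidate ancestral states, minimum number of changes) for a node.

    `node` is a leaf name (str) or a (left, right) pair of subtrees;
    `states` maps each leaf name to its observed social system.
    """
    if isinstance(node, str):                 # leaf: its observed state, zero changes
        return {states[node]}, 0
    left, right = node
    lset, lcost = fitch(left, states)
    rset, rcost = fitch(right, states)
    if lset & rset:                           # children agree: keep the intersection
        return lset & rset, lcost + rcost
    return lset | rset, lcost + rcost + 1     # children disagree: union, one more change

# Hypothetical tips and social systems, purely for illustration.
tip_states = {
    "macaque": "mixed-sex group",
    "baboon": "mixed-sex group",
    "gorilla": "harem",
    "titi_monkey": "pair",
    "lemur": "solitary",
}
tree = ((("macaque", "baboon"), ("gorilla", "titi_monkey")), "lemur")

root_candidates, n_changes = fitch(tree, tip_states)
print(root_candidates, n_changes)
```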

To the researchers' surprise, the most sensible solution suggested that the solitary ancestor started banding together not in pairs, as scientists had thought, but as loose groups of both sexes, as the team reports online today in Nature. Given the modern distribution of social organizations, the most likely time for this shift was around 52 million years ago, when the ancestors of monkeys and apes split off from the ancestors of lemurs and other prosimian primates.

Shultz suspects that, at this time, the nocturnal ancestors of today's primates became more active during the day. It's easier to sneak around at night when you're alone, she notes, but when you start hunting during the day, when predators can more easily spot you, there's safety in numbers.

But not all of today's primates live in large, mixed-sex groups. A few, such as the New World titi monkeys, live in pairs. And some primates, such as gorillas, form harems with one male and multiple females. The analysis shows that these social structures showed up only about 16 million years ago.

"When I read the paper, I was really quite struck with what a different picture [it] gives us," says Joan Silk, an anthropologist at the University of California, Los Angeles. "[Some] theoretical models will have to be revised."

Bernard Chapais, a primatologist at the University of Montreal in Canada, is impressed with how many primates the analysis included. "It's close to the total number of species in the primate order," he says. He agrees with Shultz's scenario, but he and Silk would have liked to see Shultz consider more details, such as the mode of reproduction, when classifying social systems—something she plans to do. Even without that refinement, however, "these analyses represent a welcome addition to the current study of social evolution," says Peter Kappeler, an anthropologist at the University of Göttingen in Germany.

This story provided by ScienceNOW, the daily online news service of the journal Science.

Image: Gelada baboons (Theropithecus gelada) live in large, mixed-sex groups. Dave Watts/Flickr


The Invention of World History

For most of history, different peoples, cultures and religious groups have lived according to their own calendars. Then, in the 11th century, a Persian scholar attempted to create a single, universal timeline for all humanity.

The baptism of Christ, from The Chronology of Ancient Nations, 1307. (Edinburgh University Library/Bridgeman Images)

Today, it is taken for granted that ‘World History’ exists. Muslims, Jews and Chinese each have their own calendars and celebrate their own New Year’s Day. But for most practical matters, including government, commerce and science, the world employs a single common calendar. Thanks to this, it is possible to readily translate dates from the Chinese calendar, or from the Roman, Greek or Mayan, into the same chronological system that underlies the histories of, say, Vietnam or Australia.

This single global calendar enables us to place events everywhere on a single timeline. Without it, temporal comparisons across cultures and traditions would be impossible. It is no exaggeration to say that this common understanding of time and our common calendar system are the keys to world history.

It was not always the case. Most countries, cultures or religious groups have lived according to their own calendars. Each designated its own starting point for historical time, be it the Creation, Adam and Eve or some later event, such as the biblical Flood. Even when they acknowledged a common point in time, as did both Greeks and Persians with the birth of Alexander the Great, they differed about when that event took place.

The ancient Greeks pioneered the systematic study of history and, even today, Herodotus (c.484-425 BC) stands out for his omnivorous curiosity about other peoples and cultures. Throughout his Histories he regales his readers with exotica gleaned from his extensive travels and enquiries. He explains how each culture preserves and protects its own history. He reports admiringly on how the Egyptians maintained lists of their kings dating back 341 generations. His implication is that all customs and traditions are relative. Yet for two reasons the broad-minded Herodotus, whom Cicero called ‘the Father of History’, stopped short of asking how one might coordinate or integrate the Egyptian and Greek systems of time and history, or those of any other peoples.

For all his interest in diverse peoples and cultures, Herodotus wrote for a Greek audience. The structure of his Histories allowed ample space for digressions that would inform or amuse his readers, but differing concepts of time were not among them. Herodotus and other Greeks of the Classical age were curious about the larger world, but ultimately their subject was Greece and they remained content to view the world through their own calendar. The same could be said for the other peoples of the ancient world. Each was so immersed in the particularities of its own culture that it would never have occurred to them to enquire into how other peoples might view historical time. Herodotus had come closer to perceiving the need for a world history than anyone before him.

Other ancient thinkers came as far as Herodotus, but no further. The Greek historian Polybius (200-118 BC) penned what he called a Universal History, embracing much of the Middle East, but he passed over differing concepts of history and time. Instead he shoehorned all dates into the four-year units of the Olympiads. This made his dates intelligible to Romans and Greeks but unintelligible to everyone else. Similarly, the Jewish historian Josephus (AD 37-100) took as his subject the interaction of Jews and Romans, two peoples with markedly different understandings of time. Having himself defected to the Roman side, he employed Roman chronology throughout The Jewish War and Antiquities of the Jews and felt no need to correlate that system with the calendar of the Jews.

This, then, was the situation in the year 1000, when a largely unknown Central Asian scholar from Kath in the far west of modern Uzbekistan confronted the problem of history and time. Abu Rayhan Muhammad al-Biruni (973-1039) was an unlikely figure to take up so abstruse a task. Just 29 years old, he had written half a dozen papers on astronomy and geodesy. He was also involved in a vitriolic exchange in Bukhara with the young Ibn Sina, who later gained fame for his Canon of Medicine. But Biruni was a stranger to history and had never studied the many foreign cultures that had developed their own systems of time. Worse, he had lost several years fleeing a wave of civil unrest that swept the region. Fortunately for him, an exiled ruler from Gorgan near the Caspian Sea had been able to reclaim his throne and invited the promising young scientist to come and adorn his court. When that ruler, Qabus, asked Biruni to provide an explanation ‘regarding the eras used by different nations, and regarding the differences of their roots, i.e., … of the months and years on which they are based’, Biruni was not in a position to say no.

Biruni soon amassed religious and historical texts of the ancient Egyptians, Persians, Greeks and Romans and then gathered information on Muslims, Christians and Jews. His account of the Jewish calendar and festivals anticipated those of the Jewish philosopher Maimonides by more than a century. He also assembled evidence on the measurement of time and history from lesser-known peoples and sects from Central Asia, including his own Khwarazmians, a Persianate people with its own calendar system. In his research he called on his knowledge of languages, including Persian, Arabic and Hebrew, as well as his native Khwarazmian. For others he relied on translations or native informants.

In a decision that made his book as inaccessible to the general reader as it is valuable to specialists, Biruni included an overwhelming mass of detail on all known histories and calendar systems. The only ones excluded were those of India and China, about which he confessed he lacked sufficient written data. So thorough was Biruni that his Chronology of Ancient Nations remains the sole source for much invaluable data on peoples as diverse as pre-Muslim Arabs, followers of various ‘false prophets’ and even Persians and Jews.

Biruni could have made it easier for his reader had he presented everything from just one perspective: his own. But this was not his way. Unlike Herodotus, who in the end adhered to a Greek perspective, or Persian writers who applied their own cultural measure to everyone else, Biruni began with the assumption that all cultures were equal. A relativist’s relativist, he surpassed all who preceded him in the breadth of his perspective. Who but Biruni would make a point of telling readers that he interviewed heretics?

It is not surprising, given his background. Khwarezm today is all but unknown. Yet 1,000 years ago it was a land of irrigated oases and thriving cities, which had grown rich on direct trade with India, the Middle East and China. Biruni’s home town of Kath was populated by Muslims, Zoroastrians, Christians and Jews, as well as traders from every part of Eurasia, including Hindus from the Indus Valley. It is unlikely that any part of the Eurasian land mass at the time spawned more people who accepted pluralism as a fact than Central Asia in general and Khwarezm in particular.

Had Biruni made only this affirmation, it is doubtful we would remember his Chronology today. But he did not and for an important reason. Qabus had made clear that he wanted a single, simple system of time, so that henceforth he would not have to consult multiple books. He also wanted one that could be applied to business and commerce, as well as national history and lore. For his part, Biruni was glad to acknowledge that different peoples view time differently, but he insisted that there exists an objective basis for evaluating each system, namely the precise duration of a day, month and year as measured by science. An astronomer and mathematician, Biruni meticulously presented the best scientific evidence on the length of the main units of time and recalculated every date recorded in every system in terms of his new, autonomous measure.

Bewildering mess

No sooner did he launch into this monumental project than he found himself in a bewildering mess. ‘Every nation has its own [system of] eras’, he wrote, and none coincide. The confusion begins, he demonstrated, with the failure of some peoples, notably the Arabs, to understand that the only precise way to measure a day is when the sun is at the meridian: at noon or midnight. Errors in measuring a day in different cultures create months and then years of differing length. The result is a hopeless muddle.

Biruni seethed at the sheer incompetence he encountered on this crucial point. He then turned to the manner in which different peoples date the beginning of historic time, and his anger turned to apoplexy. ‘Everything’, he thunders, ‘the knowledge of which is connected with the beginning of creation and with the history of bygone generations, is mixed up with falsification and myths.’ How can different peoples date creation as 3,000, 8,000 or 12,000 years ago? Even the Jews and Christians are at odds, with both of them following systems of time that are ‘obscurity itself’.

In a stunning aside, Biruni suggests that some of the errors may be traced to differences among biblical texts. Towards the Jews he is forgiving: ‘It cannot be thought strange that you should find discrepancies with people who have several times suffered so much from captivity and war as the Jews.’ But Christians, by trying to blend the Jewish and Greek systems, came up with an inexcusable chaos.

Biruni is no kinder to Arabs and Muslims. But while Muslims, Christians and Jews debate their differing dates for Adam and Eve and the biblical Flood, the Persians, deemed no less intelligent, deny that the Flood ever took place. Biruni concedes that pre-Muslim Arabs at least based their calendar on the seasons, but their system fell short of that of the Zoroastrian Persians. When he came across an Arab writer ‘Who was … very verbose … on the superiority of the Arabs to the Persians’, he opined: ‘I don’t know if he was really ignorant or only pretended to be.’

Such ridicule permeates Biruni’s Chronology. Sometimes it is direct, though even more scathing when indirect. In chart after chart he lists the intervals between major world events according to the various religions and peoples. Typical is his chart for dating the lives of Adam and Eve, which no one could perceive as anything but pure foolishness. Everywhere, he concludes, ‘History is mixed with lies’, as are all the cultures of mankind. In a damning passage, Biruni lists what each religion and people prohibits, indicating the capriciousness and outright foolishness of most of the laws by which people seek to order their lives.

Reasoned knowledge

Seeking the cause of such nonsense, Biruni points to the almost universal refusal to base knowledge on reason. It is not just the unreason of the astrologer, ‘who is so proud of his ingenuity’, but of all the peoples and cultures of the world. The only ones to escape Biruni’s wrath are the Greeks, whom he describes as ‘deeply imbued with, and so clever in geometry and astronomy, and they adhere so strictly to logical arguments that they are far from having recourse to the theories of those who derive the basis of their knowledge from divine inspiration’.

Biruni pushed his query to its logical conclusion. A chief difference among competing calendar systems is the way they account – or fail to account – for the fact that an astronomical year is 365 days and six hours long. To assume any other length – to fail, for example, to add in that extra quarter of a day – causes all feasts and holidays to migrate in time gradually through the year. This is why the pre-Muslim Arabs’ month of fasting was fixed in the calendar, while Ramadan now moves throughout the year. Both problems can be rectified by adding to the calendar of 365 days an extra day every fourth year, or ‘leap year’.
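
To make the arithmetic concrete, here is a minimal sketch in Python using the article’s round figures: a calendar that ignores the extra quarter-day drifts against the seasons, a purely lunar year of roughly 354 days drifts even faster (which is why Ramadan migrates through the year), and a leap day every fourth year cancels the drift. The function and numbers are illustrative only.

```python
# How far a calendar drifts from the seasons when its year is the wrong length.

ASTRONOMICAL_YEAR = 365.25   # days: 365 days and six hours, as in the article

def drift_after(years, days_per_calendar_year=365, leap_every=None):
    """Days by which the calendar lags the astronomical year after `years` years."""
    calendar_days = years * days_per_calendar_year
    if leap_every:
        calendar_days += years // leap_every          # intercalated (leap) days
    return years * ASTRONOMICAL_YEAR - calendar_days

print(drift_after(4))                      # 1.0   day of drift after only four years
print(drift_after(120))                    # 30.0  days: a fixed feast has moved a month
print(drift_after(1, 354))                 # 11.25 days per year for a 354-day lunar year
print(drift_after(120, leap_every=4))      # 0.0   intercalation holds the seasons in place
```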

Called ‘intercalation’, this simple process became a litmus test by which Biruni measured the intellectual seriousness of all cultures. He praised the Egyptians, Greeks, Chaldeans and Syrians for the precision of their intercalations, which came down to seconds. He was less generous towards the Jews and Nestorian Christians, even though their systems of intercalation were widely copied. He noted that in order to fix their market dates and holidays, the pre-Muslim Arabs had adopted from the Jews their primitive system of intercalation. Muhammad rejected this, saying that ‘Intercalation is only an increase of infidelity, by which the infidels lead people astray’. With astonishing bluntness, Biruni made known his view that it was simply a mistake for the Prophet Muhammad to have rejected the adjustment of the year to reflect astronomical reality. Carefully hiding behind the words of another author, Biruni concluded that this decision by Muhammad, based on the Quran itself, ‘did much harm to the people’. Some later adjustments were made, but they failed to address the core problem. ‘It is astonishing’, he fulminated, ‘that our masters, the family of the Prophet, listened to such doctrines.’

Directions of prayer

This was but one of Biruni’s ventures onto extremely sensitive ground. In another aside, he considers the Islamic custom of addressing prayers to the location of Mecca, termed the Kibla. After noting that Muslims had initially prayed to Jerusalem, he laconically observed that Manicheans pray towards the North Pole and Harranians to the South Pole. Thus armed, Biruni offered his conclusion by favourably quoting a Manichean who argued that ‘a man who prays to God does not need any Kibla at all’.

After these diversions, Biruni returned to his central task. He knew that commercial interchange requires a common system of dating events and that all interactions among peoples require a common system with which to reckon the passage of time. Moving from description to prescription, he set down steps by which the mess created by religion and national mythologies could be corrected, or at least alleviated. His method was to create a means of converting dates from one system to another. Biruni presented it in the form of a large circular graph or chart, which he termed a ‘chessboard’, showing the eras, dates and intervals according to each culture. Anyone who was ‘more than a beginner in mathematics’ could manipulate the chessboard so as to translate from one system to another. The method, he boasted, would be useful to both historians and astronomers.
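
Biruni’s actual ‘chessboard’ was a circular chart of eras and the intervals between them; as a loose modern analogue, here is a minimal sketch of the idea it rests on: once every era’s starting point is placed on one shared timeline, converting a year count from one system to another is simple addition and subtraction. The era names and offsets below are hypothetical placeholders rather than Biruni’s own figures, and a real converter would also have to reconcile differing month and year lengths.

```python
# Minimal sketch: convert a year number between eras via a shared timeline.

EPOCHS = {              # year in which each era begins, on one shared timeline
    "Seleucid": 0,      # arbitrary reference point for this sketch
    "Hijra": 933,       # assumed offset, for illustration only
    "Yazdegerd": 943,   # assumed offset, for illustration only
}

def convert(year, from_era, to_era):
    """Re-express a year counted in `from_era` as a year counted in `to_era`."""
    on_timeline = EPOCHS[from_era] + year
    return on_timeline - EPOCHS[to_era]

print(convert(400, "Hijra", "Seleucid"))    # -> 1333 on the Seleucid count (illustrative)
```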

Biruni was as impatient as he was hyperactive. Scarcely had he finished his assignment than he rushed back to his native Khwarezm in order to measure further eclipses and seek funding for even bigger projects.

We do not know if Biruni managed to keep a copy of his Chronology and the calculator for all human history. The originals doubtless remained with Qabus. There is no reason to think that it gained wide dissemination, even in the Islamic world. If a copy reached the West before the 19th century, it remained unknown to scholarship and untranslated. Until a Leipzig scholar named Edward Sachau found a copy and translated it into English in 1879, Biruni’s Chronology was largely forgotten. Today, three slightly differing copies are known, one in Istanbul, one in Leiden and a third, profusely illustrated, in the library of Edinburgh University. Efforts are underway in both Britain and Uzbekistan to combine all three in a modern edition.

Before the appearance of Biruni’s Chronology there had been no universal history. Nor could it have been written, because there existed no unified matrix for measuring time that extended across religions and civilisations. Biruni’s was the first global calendar system and hence the essential tool for the construction of an integrated global history.

By grounding his concept of human history on the solid firmament of astronomy and reason, Biruni gave all peoples of the world a simple method for fixing dates on a single calendar system. Not until recent decades have thinkers taken up the concept of a universal history to which Biruni’s Chronology of Ancient Nations opened the path.

The Cambridge scientist C.P. Snow delivered his celebrated Rede lecture on ‘The Two Cultures’ in 1959. His critique of modern learning called attention to what he saw as the breakdown of communication between science and the humanities. In spite of several generations of historians seeking to ground their work more solidly on scientific method, the rift persists.

Abu Rayhan Muhammad al-Biruni, writing a thousand years ago, issued the same cri de coeur. Yet, unlike Snow, this 29-year-old thinker from Central Asia not only decried the total absence of rational and scientific thought in history and the social sciences, but did more than anyone before him to correct this omission. Along with Pythagoras, he believed that ‘Things are numbers’. By applying this maxim, he opened the way to a concept of universal history that had previously been impossible and combined the ‘Two Cultures’ in a way that still deserves our admiration.

S. Frederick Starr is a research professor at the Paul H. Nitze School of Advanced International Studies at Johns Hopkins University. This article was first published in the July 2017 issue of History Today.
