Unpatchable Nintendo Switch Exploit

 

A newly published exploit for the Nintendo Switch console is unpatchable. It can't be fixed via a downloadable patch because it takes advantage of a vulnerability in the USB recovery mode, circumventing the lock-out operations that would usually protect the bootROM. The flawed bootROM can't be modified once the chip leaves the factory: access to the fuses needed to configure the device's ipatches was blocked when the ODM_PRODUCTION fuse was burned, so no bootROM update is possible.

Nintendo may be able to detect hacked systems when they sign in to Nintendo's online servers. Nintendo could then ban those systems from accessing the servers and disable the hacked Switch's online functions.

You can read more about the unpatchable Nintendo Switch exploit at:

https://arstechnica.com/gaming/2018/04/the-unpatchable-exploit-that-makes-every-current-nintendo-switch-hackable/

Cyrix 5x86 CPU

Cyrix 5x86 ("M1sc")

Despite having the same name as AMD's 5x86 processor, the Cyrix 5x86 is a totally different animal. While AMD designed its 5x86 by further increasing the clock on the 486DX4, Cyrix took the opposite approach: it modified its M1 processor core (used for the 6x86 processor) to make a "lite" version that works on 486 motherboards. As such, the Cyrix 5x86 internally resembles a Pentium OverDrive (a Pentium core modified to work in a 486 motherboard) more than it resembles the AMD 5x86. This chip is probably the hardest to classify as either fourth or fifth generation.

The 5x86 employs several architectural features normally found only in fifth-generation designs: the pipeline is extended to six stages, the internal architecture is 64 bits wide, the primary cache (16 KB) is larger than the 486DX4's, and branch prediction is used to improve performance.

The 5x86 was available in two speeds, 100 and 120 MHz. The 5x86-120 is the most powerful chip that will run in a 486 motherboard; it offers performance comparable to a Pentium 90 or 100. The 5x86 is still a clock-tripled design, so it runs in 33 and 40 MHz motherboards. (The 100 MHz version will actually run at 50x2 as well, but it was normally run at 33 MHz.) It is a 3-volt design intended for a Socket 3 motherboard, though it will run in an earlier 486 socket if a voltage regulator is used. I have heard that some motherboards will not run this chip properly, so you may need to check with Cyrix before trying this chip in an older board. These chips have been discontinued by Cyrix but are still good performers; for those with a compatible motherboard, they are as good as you can get. Unfortunately, they are extremely difficult to find now.

The processor summary table below lists the chip's specifications by category.

General Information

  Manufacturer:                         Cyrix
  Family Name:                          5x86
  Code name:                            "M1sc"
  Processor Generation:                 Fourth
  Motherboard Generation:               Fourth
  Version:                              5x86-100        5x86-120
  Introduced:                           1996?
  Variants and Licensed Equivalents:

Speed Specifications                    5x86-100        5x86-120

  Memory Bus Speed (MHz):               33 / 50         40
  Processor Clock Multiplier:           3.0 / 2.0       3.0
  Processor Speed (MHz):                100             120
  "P" Rating:                           P75             P90

Benchmarks                              5x86-100        5x86-120

  iCOMP Rating:                         ~610            ~735
  iCOMP 2.0 Rating:                     ~67             ~81
  Norton SI:                            264             316
  Norton SI32:                          ~16             19
  CPUmark32:                            ~150            ~180

Physical Characteristics

  Process Technology:                   CMOS
  Circuit Size (microns):               0.65
  Die Size (mm^2):                      144
  Transistors (millions):               2.0

Voltage, Power and Cooling

  External or I/O Voltage (V):          3.45
  Internal or Core Voltage (V):         3.45
  Power Management:                     SMM
  Cooling Requirements:                 Active heat sink

Packaging

  Packaging Style:                      168-Pin PGA
  Motherboard Interface:                Socket 3; or 168-Pin Socket, Socket 1,
                                        Socket 2 (with voltage regulator)

External Architecture                   5x86-100        5x86-120

  Data Bus Width (bits):                32
  Maximum Data Bus Bandwidth
  (Mbytes/sec):                         127.2           152.6
  Address Bus Width (bits):             32
  Maximum Addressable Memory:           4 GB
  Level 2 Cache Type:                   Motherboard
  Level 2 Cache Size:                   Usually 256 KB
  Level 2 Cache Bus Speed:              Same as Memory Bus
  Multiprocessing:                      No

Internal Architecture

  Instruction Set:                      x86
  MMX Support:                          No
  Processor Modes:                      Real, Protected, Virtual Real
  x86 Execution Method:                 Native

Internal Components

  Register Size (bits):                 32
  Pipeline Depth (stages):              6
  Level 1 Cache Size:                   16 KB Unified
  Level 1 Cache Mapping:                4-Way Set Associative
  Level 1 Cache Write Policy:           Write-Through, Write-Back
  Integer Units:                        1
  Floating Point Unit /
  Math Coprocessor:                     Integrated
  Instruction Decoders:                 1
  Branch Prediction Buffer
  Size / Accuracy:                      !? entries / !? %
  Write Buffers:                        !?

Performance Enhancing Features
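
Two of the figures above can be sanity-checked with simple arithmetic: the processor speed is the memory bus speed times the clock multiplier, and the peak data bus bandwidth is the bus width times the bus clock. Here is a minimal sketch in Python; the one-transfer-per-clock model and the reading of "Mbytes" as 2^20 bytes are my assumptions, but they reproduce the table's numbers:

    # Sanity-check the Cyrix 5x86 speed and bandwidth figures above.
    # Assumptions: one full-width (4-byte) transfer per bus clock,
    # and "Mbytes/sec" meaning 2**20 bytes per second.

    BUS_BYTES = 32 // 8  # 32-bit data bus -> 4 bytes per transfer

    def core_clock_mhz(bus_mhz, multiplier):
        """Processor speed = memory bus speed x clock multiplier."""
        return bus_mhz * multiplier

    def peak_bandwidth_mbytes(bus_mhz):
        """Peak data bus bandwidth in 2**20-byte Mbytes per second."""
        return bus_mhz * 1e6 * BUS_BYTES / 2**20

    for bus_mhz in (33.33, 40.0):
        print(f"{bus_mhz:5.2f} MHz bus: "
              f"core = {core_clock_mhz(bus_mhz, 3.0):5.1f} MHz, "
              f"peak = {peak_bandwidth_mbytes(bus_mhz):5.1f} Mbytes/sec")
    # -> ~100/120 MHz core and ~127.2/152.6 Mbytes/sec, matching the table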


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

History of NTFS

Overview and History of NTFS

In the early 1990s, Microsoft set out to create a high-quality, high-performance, reliable and secure operating system. The goal of this operating system was to allow Microsoft to get a foothold in the lucrative business and corporate market; at the time, Microsoft's operating systems were MS-DOS and Windows 3.x, neither of which had the power or features needed for Microsoft to take on UNIX or other "serious" operating systems. One of the biggest weaknesses of MS-DOS and Windows 3.x was that they relied on the FAT file system. FAT provided few of the features needed for data storage and management in a high-end, networked, corporate environment. To avoid crippling Windows NT, Microsoft had to create for it a new file system that was not based on FAT. The result was the New Technology File System or NTFS.

It is often said (and sometimes by me, I must admit) that NTFS was "built from the ground up". That's not strictly an accurate statement, however. NTFS is definitely "new" from the standpoint that it is not based on the old FAT file system: Microsoft designed it around the needs of its new operating system, not around an existing design it had to stay compatible with. However, NTFS is not entirely new, because some of its concepts were based on another file system that Microsoft was involved in creating: HPFS.

Before there was Windows NT, there was OS/2. OS/2 was a joint project of Microsoft and IBM in the late 1980s; the two companies were trying to create the next big success in the world of graphical operating systems. They succeeded, to some degree, depending on how you are measuring success. :^) OS/2 had some significant technical accomplishments, but suffered from marketing and support issues. Eventually, Microsoft and IBM began to quarrel, and Microsoft broke from the project and started to work on Windows NT. When they did this, they borrowed many key concepts from OS/2's native file system, HPFS, in creating NTFS.

NTFS was designed to meet a number of specific goals. In no particular order, the most important of these are:

  • Reliability: One important characteristic of a "serious" file system is that it must be able to recover from problems without losing data. NTFS implements specific features to allow important transactions to be completed as an integral whole, to avoid data loss, and to improve fault tolerance.
  • Security and Access Control: A major weakness of the FAT file system is that it includes no built-in facilities for controlling access to folders or files on a hard disk. Without this control, it is nearly impossible to implement applications and networks that require security and the ability to manage who can read or write various data.
  • Breaking Size Barriers: In the early 1990s, FAT was limited to the FAT16 version of the file system, which only allowed partitions up to 4 GiB in size. NTFS was designed to allow very large partition sizes, in anticipation of growing hard disk capacities, as well as the use of RAID arrays.
  • Storage Efficiency: Again, at the time that NTFS was developed, most PCs used FAT16, which wastes significant disk space due to slack. NTFS avoids this problem by using a very different method of allocating space to files than FAT does (a rough illustration follows this list).
  • Long File Names: NTFS allows file names to be up to 255 characters, instead of the 8+3 character limitation of conventional FAT.
  • Networking: While networking is commonplace today, it was still in its relatively early stages in the PC world when Windows NT was developed. At around that time, businesses were just beginning to understand the importance of networking, and Windows NT was given some facilities to enable networking on a larger scale. (Some of the NT features that allow networking are not strictly related to the file system, though some are.)
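
The slack problem behind the Storage Efficiency goal is easy to quantify: on average every file wastes half of its last cluster, and FAT16 clusters grow with partition size (up to 32 KiB on a 2 GiB partition), while NTFS typically uses much smaller clusters. A rough sketch in Python; the 10,000-file volume and the cluster sizes are illustrative assumptions, not figures from this page:

    # Rough estimate of slack (wasted space) under FAT16 vs. NTFS.
    # Assumptions: each file wastes half a cluster on average;
    # 32 KiB clusters (FAT16 on a 2 GiB partition) vs. 4 KiB NTFS
    # clusters; a hypothetical volume holding 10,000 files.

    KIB = 1024
    N_FILES = 10_000  # illustrative assumption

    def slack_mib(cluster_bytes, files):
        """Expected slack: each file wastes ~half a cluster."""
        return files * (cluster_bytes / 2) / (1024 * KIB)

    print(f"FAT16, 32 KiB clusters: ~{slack_mib(32 * KIB, N_FILES):.0f} MiB wasted")
    print(f"NTFS,   4 KiB clusters: ~{slack_mib(4 * KIB, N_FILES):.0f} MiB wasted")
    # -> roughly 156 MiB vs. 20 MiB on this hypothetical volume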

Of course, there are also other advantages associated with NTFS; these are just some of the main design goals of the file system. There are also some disadvantages associated with NTFS, compared to FAT and other file systems; life is full of tradeoffs. :^) In the other pages of this section we will fully explore the various attributes of the file system, to help you decide if NTFS is right for you.
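
The Reliability goal above also deserves a concrete illustration. The "integral whole" behavior is the write-ahead (journaling) pattern: record what you intend to change before changing it, so that an interrupted operation can be completed or undone on recovery. The sketch below shows only that general pattern in Python; it is not NTFS's actual log format or API:

    # Toy write-ahead journal illustrating the transactional idea
    # behind NTFS's reliability goal. This is NOT NTFS's real log
    # format; it only demonstrates the general pattern.

    import json
    import os

    LOG = "journal.log"

    def atomic_update(path, new_contents):
        # 1. Record the intent in the journal and force it to disk.
        with open(LOG, "w") as log:
            json.dump({"path": path, "new": new_contents}, log)
            log.flush()
            os.fsync(log.fileno())
        # 2. Apply the change; a crash here is recoverable from the log.
        with open(path, "w") as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())
        # 3. Commit: drop the journal entry once the change is durable.
        os.remove(LOG)

    def recover():
        # On startup, re-apply any intent left behind by a crash.
        if os.path.exists(LOG):
            with open(LOG) as log:
                entry = json.load(log)
            with open(entry["path"], "w") as f:
                f.write(entry["new"])
            os.remove(LOG)

    recover()  # safe to call at every startup; usually a no-op
    atomic_update("example.txt", "consistent contents")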

For their part, Microsoft has not let NTFS stagnate. Over time, new features have been added to the file system. Most recently, NTFS 5.0 was introduced as part of Windows 2000. It is similar in most respects to the NTFS used in Windows NT, but adds several new features and capabilities. Microsoft has also corrected problems with NTFS over time, helping it to become more stable, and more respected as a "serious" file system. Today, NTFS is becoming the most popular file system for new high-end PC, workstation and server implementations. NTFS shares the stage with various UNIX file systems in the world of small to moderate-sized business systems, and is becoming more popular with individual "power" users as well.


The PC Guide
Site Version: 2.2.0 – Version Date: April 17, 2001
© Copyright 1997-2004 Charles M. Kozierok. All Rights Reserved.

This is an archive of Charles M. Kozierok’s PCGuide (pcguide.com) which disappeared from the internet in 2018. We wanted to preserve Charles M. Kozierok’s knowledge about computers and are permanently hosting a selection of important pages from PCGuide.

Commodore Plus/4

Plus/4 – 121 colors in 1984!

Model:           Commodore Plus/4 

Manufactured:    1984 

Processor:       7501/8501, ~0.88MHz when the raster beam is on the
visible screen and ~1.77MHz the rest of the time. (The TED chip
generates the processor frequency.) The resulting speed is about
equal to the vic-20's. A PAL vic-20 is faster than this NTSC machine,
but a PAL Plus/4 is just a little faster than a PAL vic-20.

Memory:          64 KB (60671 bytes available in Basic)

Graphics:        TED 7360 (Text Editing Device 7360 HMOS)
          
Hi-Resolution:   320x200                 
                 Colors: 121 (All can be visible at the same time)     
                 Hardware reverse display of characters     
                 Hardware blinking
                 Hardware cursor
                 Smooth scrolling
                 Multicolor 160x200
                 (No sprites)

Sound:           TED (7360)
                 2 voices (two tones or one tone + noise)
"OS":            Basic 3.5
Built in         Tedmon,
software:        "3-plus-1" = word processor, spreadsheet, database
                 and graphs.
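
The split clock in the processor line above invites a quick estimate of the average speed. A minimal sketch in Python; the visible-screen fractions are my guesses (the real figure depends on the video mode and border size), so the results are illustrative only:

    # Estimate the Plus/4's average CPU speed from the TED's two
    # clock rates: ~0.88 MHz while the raster is on the visible
    # screen, ~1.77 MHz the rest of the time. The visible-screen
    # fraction is an assumption, not a measured value.

    SLOW_MHZ = 0.88  # raster on visible screen
    FAST_MHZ = 1.77  # borders and blanking

    def average_mhz(visible_fraction):
        """Time-weighted average of the two TED clock rates."""
        return visible_fraction * SLOW_MHZ + (1 - visible_fraction) * FAST_MHZ

    for frac in (0.4, 0.5, 0.6):
        print(f"visible {frac:.0%} of the time -> ~{average_mhz(frac):.2f} MHz")
    # Roughly 1.2-1.4 MHz for these guesses: the same ballpark as the
    # ~1 MHz vic-20, as the text says.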

History and thoughts

The Plus/4 was called the 264 as a prototype (January 1984) and was supposed to have customer-selectable built-in software. But Commodore decided to ship all of them with the same built-in software and renamed the computer the Plus/4 (June 1984). (The reason for the long delay was that Commodore's factories were busy producing C64s.) There were other versions of the same "TED" computer (more or less): The C16 looks like a black vic-20 with white keys but is the same computer as the Plus/4, only with no built-in software (except for Tedmon), only 16 KB of RAM, and no RS232. It looks like a vic-20 because Commodore intended it as a replacement for the vic-20 when that machine was cancelled in 1984. There was also a C116 with the same features as the C16, but it looked like a Plus/4 with rubber keys. About 400,000 Plus/4s were made (compared to 2.5 million vic-20s and something like 15 million C64s).

The reason the Plus/4 wasn't more popular comes down to one thing: the C64! Commodore kind of competed with themselves. Let's list the benefits of the two computers:

 Plus/4:
   * 121 colors (compared to c64's 16)
   * Very powerful basic
   * Built in machine language monitor
   * A little faster
   * Built in software
   * Lower price

 C64:
   * Sprite graphics
   * Better sound
   * Lots of software available
   * All your friends have one
   * Your old vic-20 tape recorder will work without an adapter
   * Your old vic-20 joysticks will work without adapters

Well, which would you choose?

Well, Basic 3.5 is quite powerful. It has commands for graphics, sound, disk access, error handling etc. I counted 111 commands/functions (compared to 70 for the C64). On the c64, POKE and PEEK are the only way to access graphics, sprites and sound. And with most of those registers being two bytes big and the chips a bit complex to set up, that is quite troublesome and time-consuming in Basic. And drawing graphics with lines, circles etc. using only Basic on the c64 is just impossible (or would take a year!). On the other hand, if Basic programming doesn't interest you, but copying pirated games from your friends does, then the c64 is your computer… I mean back then! 😉

There were more reasons than just the c64 for the Plus/4's lack of success. There are many theories about this on the internet, so instead of just repeating them, I would like to contribute another one: the strange names! Why on earth name the same line of computers so differently? The Plus/4, C16 and C116 are more compatible with each other than a vic-20 with and without memory expansion! And they even look different! I would have made two different computers: "TED-64" (the Plus/4) and "TED-16" (the C16, but in a Plus/4 case).

They would also have normal joystick and tape ports (or adapters included with the computer). The 3-plus-1 software could have been left out and sold separately on a cartridge to bring down the price of the computer; it could still have been bundled with the computer at a reduced price if you wanted. This way, the original 264 idea of customer-selectable included software would have been doable, with all the selectable software on different cartridges.


My impressions

I have only just gotten the Plus/4, but my impression of it so far is very positive. It's little and neat. I like the Basic and the graphics. The computer has a very "Commodore" feeling. I would say it's like a mix between the vic-20 (for the simplicity, the single graphics/sound chip and the default colors), the C64 (for the similar graphics) and the C128 (for the powerful Basic and the similarities with the 128's VDC chip features, like blinking etc.). The Plus/4 also has the Esc codes that the C128 has. The machine language monitor is also almost the same. But at the same time the Plus/4 is simple and easy to get an overview of, like the vic-20. I think it's a well designed computer. The only thing I don't like about the Plus/4 is the lack of a Restore key. But there are work-arounds (Runstop+reset, for example). I have written some more tips about this in the manuals below.

The same people who designed the Plus/4 (except for one) later designed the C128.

If you plan to get a Plus/4, then you might want to know that the 1541 diskdrive works with it, and the video cable is the same as for the c64 (at least for composite and sound, which my cable uses). But for joysticks you need to make a little adapter, and likewise for the tape recorder (if it isn't the black type that has a built-in adapter).

My Plus/4 is an NTSC machine with a 110V power supply, and living in Sweden I needed to buy a 220->110V converter. The Plus/4 does not need the AC frequency from the PSU (as the C64 does), so a simple converter that generates 110V at 50Hz is fine. My Plus/4 has a square power plug. Others have a round one, and with one of those I could have used a European c64 power supply instead. There are of course PAL Plus/4s as well, but I got mine for free and I like the NTSC display too. No BIG border around the screen like on all PAL Commodores. The NTSC Plus/4 also has a slightly faster key repeat, so it feels a little faster even though the PAL version runs faster. BUT: there seems to be MUCH more PAL software available…


This is an archive of pug510w’s Dator Museum which disappeared from the internet in 2017. We wanted to preserve the knowledge about the Commodore Plus/4 and are permanently hosting a copy of Dator Museum.

Commodore Plus/4 Service Manual

Convert FAT Disks to NTFS

This article describes how to convert FAT disks to NTFS. See the Terms sidebar for definitions of FAT, FAT32 and NTFS. Before you decide which file system to use, you should understand the benefits and limitations of each of them.

Changing a volume's existing file system can be time-consuming, so choose the file system that best suits your long-term needs. If you decide to use a different file system, you must back up your data and then reformat the volume using the new file system. However, you can convert a FAT or FAT32 volume to an NTFS volume without formatting the volume, though it is still a good idea to back up your data before you convert.

Note  Some older programs may not run on an NTFS volume, so you should research the current requirements for your software before converting.

Choosing Between NTFS, FAT, and FAT32

You can choose between three file systems for disk partitions on a computer running Windows XP: NTFS, FAT, and FAT32. NTFS is the recommended file system because it is more powerful than FAT or FAT32, and includes features required for hosting Active Directory as well as other important security features. You can use features such as Active Directory and domain-based security only by choosing NTFS as your file system.

Converting to NTFS Using the Setup Program

The Setup program makes it easy to convert your partition to the new version of NTFS, even if it used FAT or FAT32 before. This kind of conversion keeps your files intact (unlike formatting a partition).

Setup begins by checking the existing file system. If it is NTFS, conversion is not necessary. If it is FAT or FAT32, Setup gives you the choice of converting to NTFS. If you don’t need to keep your files intact and you have a FAT or FAT32 partition, it is recommended that you format the partition with NTFS rather than converting from FAT or FAT32. (Formatting a partition erases all data on the partition and allows you to start fresh with a clean drive.) However, it is still advantageous to use NTFS, regardless of whether the partition was formatted with NTFS or converted.

Converting to NTFS Using Convert.exe

A partition can also be converted after Setup by using Convert.exe. This kind of conversion keeps your files intact (unlike formatting a partition).

To find out more about Convert.exe:

1. After completing Setup, click Start, click Run, type cmd, and then press ENTER.
2. In the command window, type help convert, and then press ENTER.
Converting FAT volumes to NTFS

To convert a volume to NTFS from the command prompt

1. Open Command Prompt: click Start, point to All Programs, point to Accessories, and then click Command Prompt.
2. In the command prompt window, type: convert drive_letter: /fs:ntfs

For example, typing convert D: /fs:ntfs would convert drive D: to the NTFS format. You can convert FAT or FAT32 volumes to NTFS with this command.

Important  Once you convert a drive or partition to NTFS, you cannot simply convert it back to FAT or FAT32. You will need to reformat the drive or partition, which will erase all data, including programs and personal files, on the partition.

Commodore Computer History Archive

As I train our new computer systems engineers, I have found that few of them know anything about the Commodore home computer systems. In the early 1990s, when I first started getting into electronics and computers, Commodores were everywhere. By the mid 90s they were ancient relics. I always had five or six laying around the shop; most were given to me for spare parts by customers. The majority of them had no issues, they were just outdated. For fun, and to train new guys, we repaired many of them over the years. As time went on, fewer and fewer of our computer systems engineers had any experience with Commodores. Today, virtually no one under 35 knows what a Commodore computer system is.

The MOS 6502 chip

The reason a 15-year-old could work on a Commodore was that the systems were all based around simple CPUs. The MOS 6502 was very easy to diagnose issues with and repair. All I needed to work on the circuits was a simple analog volt meter and a reference voltage. Digital voltmeters were very expensive in the 1990s; I don't think we had one until the late 90s.

Most prominent home computer systems and video game systems in the 1980s and 1990s had a MOS 6502 or a derivative within them. These derivative chips were called the 650x, or the 6502 family of chips. The Commodore VIC-20, Commodore 64, Apple II, Atari 800, Atari 2600 and NES all had a 6502 or 650x chip in them. Almost everything made from the mid 1970s to the mid 1980s had a connection to the 6502 family. By the late 1980s, newer and faster chips by Motorola and Intel replaced the MOS 6502 family as the go-to processor.

Commodore History Disappearing

While training new field engineers here at Karls Technology, I have been looking online for reference materials about Commodores. Back in the 1990s, reference material was available at the library, in hobby magazines and on BBSs. Today, I find very little good reference material about Commodores, MOS or the 6502 family of chips. Previously, you could find people who worked for MOS, Commodore or GMT around the internet. As those engineers of yesterday pass away, their knowledge of the history of computing leaves us.

Before the days of blogs, much of the early computing history was recorded on early engineers' personal websites. Those websites have gone offline or were hosted by companies that no longer exist.

Computer History Archive

Because this knowledge is leaving us, and much of it exists only offline, we decided to start archiving Commodore, 6502 family and other early computer history information. We will scan and post below any knowledge we find in offline repositories. In addition, any historical personal websites about early computer history from yesteryear will be archived here. Our goal is to document as much early computer history as possible.

Text Editing Device TED 7360 Datasheet

Commodore Plus/4 Specifications

Commodore Plus/4 Service Manual

Commodore Semiconductor Group’s Superfund Site from the EPA

Designing Calm Technology by Mark Weiser and John Seely Brown, Xerox PARC, 1995.

Designing Calm Technology

by Mark Weiser and John Seely Brown

Xerox PARC
December 21, 1995

Introduction

Bits flowing through the wires of a computer network are ordinarily invisible. But a radically new tool shows those bits through motion, sound, and even touch. It communicates both light and heavy network traffic. Its output is so beautifully integrated with human information processing that one does not even need to be looking at it or near it to take advantage of its peripheral clues. It takes no space on your existing computer screen, and in fact does not use or contain a computer at all. It uses no software, only a few dollars in hardware, and can be shared by many people at the same time. It is called the “Dangling String”.

Created by artist Natalie Jeremijenko, the “Dangling String” is an 8 foot piece of plastic spaghetti that hangs from a small electric motor mounted in the ceiling. The motor is electrically connected to a nearby Ethernet cable, so that each bit of information that goes past causes a tiny twitch of the motor. A very busy network causes a madly whirling string with a characteristic noise; a quiet network causes only a small twitch every few seconds. Placed in an unused corner of a hallway, the long string is visible and audible from many offices without being obtrusive. It is fun and useful. The Dangling String meets a key challenge in technology design for the next decade: how to create calm technology. 

We have struggled for some time to understand the design of calm technology, and our thoughts are still incomplete and perhaps even a bit confused. Nonetheless, we believe that calm technology may be the most important design problem of the twenty-first century, and it is time to begin the dialogue.

The Periphery

Designs that encalm and inform meet two human needs not usually met together. Information technology is more often the enemy of calm. Pagers, cellphones, news services, the World Wide Web, email, TV, and radio bombard us frenetically. Can we really look to technology itself for a solution?

But some technology does lead to true calm and comfort. There is no less technology involved in a comfortable pair of shoes, in a fine writing pen, or in delivering the New York Times on a Sunday morning, than in a home PC. Why is one often enraging, the others frequently encalming? We believe the difference is in how they engage our attention. Calm technology engages both the center and the periphery of our attention, and in fact moves back and forth between the two.

We use "periphery" to name what we are attuned to without attending to explicitly. Ordinarily, when driving, our attention is centered on the road, the radio, or our passenger, but not on the noise of the engine. But an unusual noise is noticed immediately, showing that we were attuned to the noise in the periphery and could come quickly to attend to it.

It should be clear that what we mean by the periphery is anything but on the fringe or unimportant. What is in the periphery at one moment may in the next moment come to be at the center of our attention and so be crucial. The same physical form may even have elements in both the center and periphery. The ink that communicates the central words of a text also, through choice of font and layout, peripherally clues us in to the genre of the text.

A calm technology will move easily from the periphery of our attention, to the center, and back. This is fundamentally encalming, for two reasons.

First, by placing things in the periphery we are able to attune to many more things than we could if everything had to be at the center. Things in the periphery are attuned to by the large portion of our brains devoted to peripheral (sensory) processing. Thus the periphery is informing without overburdening.

Second, by recentering something formerly in the periphery we take control of it. Peripherally we may become aware that something is not quite right, as when awkward sentences leave a reader tired and discomforted without knowing why. By moving sentence construction from periphery to center we are empowered to act, either by finding better literature or by accepting the source of the unease and continuing. Without centering, the periphery might be a source of frantic following of fashion; with centering, the periphery is a fundamental enabler of calm through increased awareness and power.

Not all technology need be calm. A calm videogame would get little use; the point is to be excited. But too much design focuses on the object itself and its surface features without regard for context. We must learn to design for the periphery so that we can most fully command technology without being dominated by it. 

Our notion of technology in the periphery is related to the notion of affordances, due to Gibson and popularized by Norman. An affordance is a relationship between an object in the world and the intentions, perceptions, and capabilities of a person. The side of a door that only pushes out affords this action by offering a flat pushplate. The idea of affordance, powerful as it is, tends to describe the surface of a design. For us the term "affordance" does not reach far enough into the periphery, where a design must be attuned to but not attended to.

Three signs of calm technology

Technologies encalm as they empower our periphery. This happens in two ways. First, as already mentioned, a calming technology may be one that easily moves from center to periphery and back. Second, a technology may enhance our peripheral reach by bringing more details into the periphery. An example is a video conference that, by comparison to a telephone conference, enables us to attune to nuances of body posture and facial expression that would otherwise be inaccessible. This is encalming when the enhanced peripheral reach increases our knowledge and so our ability to act without increasing information overload.

The result of calm technology is to put us at home, in a familiar place. When our periphery is functioning well we are tuned into what is happening around us, and so also to what is going to happen, and what has just happened. We are connected effortlessly to a myriad of familiar details. This connection to the world around us is what we call "locatedness", and it is the fundamental gift that the periphery gives us.

Examples of calm technology

To deepen the dialogue we now examine a few designs in terms of their motion between center and periphery, peripheral reach, and locatedness. Below we consider inner office windows, Internet Multicast, and once again the Dangling String.

inner office windows

We do not know who invented the concept of glass windows from offices out to hallways. But these inner windows are a beautifully simple design that enhances peripheral reach and locatedness. 

The hallway window extends our periphery by creating a two-way channel for clues about the environment. Whether it is motion of other people down the hall (it's time for lunch; the big meeting is starting), or noticing the same person peeking in for the third time while you are on the phone (they really want to see me; I forgot an appointment), the window connects the person inside to the nearby world.

Inner windows also connect with those who are outside the office. A light shining out into the hall means someone is working late; someone picking up their office means this might be a good time for a casual chat. These small clues become part of the periphery of a calm and comfortable workplace.

Office windows illustrate a fundamental property of motion between center and periphery. Contrast them with an open office plan in which desks are separated only by low or no partitions. Open offices force too much to the center. For example, a person hanging out near an open cubicle demands attention by social conventions of privacy and politeness. There is less opportunity for the subtle clue of peeking through a window without eavesdropping on a conversation. The individual, not the environment, must be in charge of moving things from center to periphery and back. 

The inner office window is a metaphor for what is most exciting about the Internet, namely the ability to locate and be located by people passing by on the information highway.

Internet Multicast

A technology called Internet Multicast may become the next World Wide Web (WWW) phenomenon. Sometimes called the MBone (for Multicast backBONE), multicasting was invented by Steve Deering, then a graduate student at Stanford University.

Whereas the World Wide Web (WWW) connects only two computers at a time, and then only for the few moments that information is being downloaded, the MBone continuously connects many computers at the same time. To use the familiar highway metaphor, for any one person the WWW only lets one car on the road at a time, and it must travel straight to its destination with no stops or side trips. By contrast, the MBone opens up streams of traffic between multiple people and so enables the flow of activities that constitute a neighborhood. Where the WWW ventures timidly to one location at a time before scurrying back home again, the MBone sustains ongoing relationships between machines, places, and people.

Multicast is fundamentally about increasing peripheral reach, derived from its ability to cheaply support multiple multimedia (video, audio, etc.) connections all day long. Continuous video from another place is no longer television, and no longer video-conferencing, but more like a window of awareness. A continuous video stream brings new details into the periphery: the room is cleaned up, something important may be about to happen; everyone got in late today on the east coast, must be a big snowstorm or traffic tie-up. 

Multicast shares with videoconferencing and television an increased opportunity to attune to additional details. Compared to a telephone or fax, the broader channel of full multimedia better projects the person through the wire. The presence is enhanced by the responsiveness that full two-way (or multiway) interaction brings. 

Like the inner windows, Multicast enables control of the periphery to remain with the individual, not the environment. A properly designed real-time Multicast tool will offer, but not demand. The MBone provides the necessary partial separation for moving between center and periphery that a high bandwidth world alone does not. Less is more, when less bandwidth provides more calmness. 

Multicast at the moment is not an easy technology to use, and only a few applications have been developed by some very smart people. This could also be said of the digital computer in 1945, and of the Internet in 1975. Multicast in our periphery will utterly change our world in twenty years.

Dangling String

Let’s return to the dangling string. At first it creates a new center of attention just by being unique. But this center soon becomes peripheral as the gentle waving of the string moves easily to the background. That the string can be both seen and heard helps by increasing the clues for peripheral attunement.

The dangling string increases our peripheral reach to the formerly inaccessible network traffic. While screen displays of traffic are common, their symbols require interpretation and attention, and do not peripheralize well. The string, in part because it is actually in the physical world, has a better impedance match with our brain’s peripheral nerve centers.

In Conclusion

It seems contradictory to say, in the face of frequent complaints about information overload, that more information could be encalming. It seems almost nonsensical to say that the way to become attuned to more information is to attend to it less. It is these apparently bizarre features that may account for why so few designs properly take into account center and periphery to achieve an increased sense of locatedness. But such designs are crucial. Once we are located in a world, the door is opened to social interactions among shared things in that world. As we learn to design calm technology, we will enrich not only our space of artifacts, but also our opportunities for being with other people. Thus may design of calm technology come to play a central role in a more humanly empowered twenty-first century.

Bibliography

Gibson, J. The Ecological Approach to Visual Perception. New York: Houghton Mifflin, 1979.

Norman, D.A. The Psychology of Everyday Things. New York: Basic Books, 1988.

MBone. http://www.best.com/~prince/techinfo/mbone.html 

Brown, J.S. and Duguid, P. Keeping It Simple: Investigating Resources in the Periphery. To appear in Solving the Software Puzzle. Ed. T. Winograd, Stanford University. Spring 1996. 

Weiser, M. The Computer for the Twenty-First Century. Scientific American. September 1991.

Brown, J.S. http://www.startribune.com/digage/seelybro.htm 

Weiser, M. http://www.ubiq.com/weiser


This is an archive of Mark Weiser’s ubiquitous computing website (ubiq.com) which disappeared from the internet in 2018 some time after Mark Weiser passed away. We wanted to preserve Mark Weiser’s knowledge about ubiquitous computing and are permanently hosting a selection of important pages from ubiq.com.

Creators Update coming to Windows 10

Microsoft just unveiled a plethora of products and updates to its hardware and software coming in 2017. One of the most interesting announcements was the Creators Update, coming to Windows 10 in the spring of 2017. The Creators Update features Windows' first integration of 3D technology: starting with one of the oldest pieces of Windows software, Paint, the new Paint 3D aims to scan objects from the real world and bring them to life in an all-new 3D environment. The update is largely meant for the new Surface Studio desktop, which aims to ease professional artists into these realms. Look for the Windows 10 Creators Update to come to all supported Windows 10 devices for free this spring.

To read more about the Creators Update, visit the upcoming features section of Microsoft.com.

Creators Update for Windows 10

Apple Refreshes the MacBook Pro

It has been four years since Apple overhauled the design of the MacBook Pro, and today Apple aims to breathe new life into the series. Designed for developers, the MacBook Pro has always featured top-of-the-line specifications to push the limits of computing of its time. Now it is thinner, more powerful, and seemingly more innovative than ever before. Apple aims to put simple, easy-to-use controls at your fingertips. The new OLED display between the keyboard and screen (replacing the function keys), dubbed the Touch Bar, uses taps and gestures to perform a wide array of tasks. The bar includes Touch ID, powered by the company's very own T1 chip, for security. Touch ID can be used with Apple Pay to make purchases as easy as a finger press.

Updated MacBook Pro 2016

One of the biggest changes to the device is its all-new USB-C ports. Any of them doubles as a charging port, so you no longer need to worry about which side of the machine your outlet is on. These ports can be used for anything: Thunderbolt, USB, HDMI, DisplayPort, VGA, you name it. The downside? No more dedicated ports. You will need an adapter for compatibility with your older devices.

The device launches in many configurations, and Apple claims that no matter which model you get, it will be twice as fast as its previous generation. Learn more about the different specifications and configurations at Apple's website. Pre-orders are expected to ship in 2-3 weeks.