This is the supplement to David MacDonald's proposed discussion and reorganization of Mobile Accessibility: How WCAG 2.0 and Other W3C/WAI Guidelines Apply to Mobile. The purpose of this reorganization is to:
Note: All proposed new guidelines and Success Criteria are numbered according to where they are proposed to sit in WCAG 2 (that is why their numbers do not follow the 3.x numbering of this section).
There was lively discussion on the list. Here it is, with redundancies etc. removed.
Patrick Lauke: "touch" is not strictly a "mobile" issue. There are already many devices (2-in-1 tablet/laptops, desktop machines with external touch-capable monitors, etc) beyond the mobile space which include touch interaction. So, a fundamental question for me would be: would these extensions be signposted/labelled as being "mobile-specific", or will they be added to WCAG 2 core in a more general, device-agnostic manner? Further, though I welcome the addition of SCs relating to touch target size and clearance, I'm wondering why we would not also have the equivalent for mouse or stylus interfaces...again, in short, why make it touch-specific, when in general the SCs should apply to all "pointers" ("mouse cursor, pen, touch (including multi-touch), or other pointing input device", to borrow some wording from the Pointer Events spec http://www.w3.org/TR/pointerevents/)?
Detlev: Hi Patrick, I didn't intend this first draft to be restricted to touch-only devices - just capturing that input mode. It's certainly good to capture input commonalities where they exist (e.g., activate elements on touchend/mouseup).
Patrick: Or, even better, just relying on the high-level focus/blur/click ones (though even for focus/blur, most touch AT don't fire them when you'd expect them - see http://patrickhlauke.github.io/touch/tests/results/#mobile-tablet-touchscreen-assistive-technology-events and particularly http://patrickhlauke.github.io/touch/tests/results/#desktop-touchscreen-assistive-technology-events where none of the tested touchscreen AT trigger a focus when moving to a control)
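For illustration, a minimal sketch of relying on the high-level click event rather than low-level touch events, assuming a hypothetical <button id="save"> control:

```js
// Minimal sketch (assumes a <button id="save"> exists in the page).
// Activating on the high-level "click" event keeps the control usable with
// touchscreen AT, which typically synthesises a click on double-tap, and with
// mouse and keyboard alike. Handling only touchstart would activate as soon
// as the finger lands (no chance to slide off and cancel) and may never fire
// at all when a screen reader intercepts the touch sequence.
const saveButton = document.querySelector('#save');

saveButton.addEventListener('click', () => {
  // Hypothetical action; a real page would submit or persist data here.
  console.log('Saved');
});
```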
Jonathan Avila: Regarding touch start and end -- we are thinking of access without AT by people with motor impairments who may tap the wrong control before sliding to locate the correct control. This is new and different from SC 3.2.x. I understand and have seen what you say about focus events and no key events, so that is a separate matter to address.
Detlev Fischer: But then there are touch-specific things, not just touch target size as mentioned by Alan, but also touch gestures without a mouse equivalent: swiping, split-tapping, long presses, rotate gestures, cursed L-shaped gestures, etc.
Patrick: It's probably worth being careful about distinguishing between gestures that the *system / AT* provides, and which are then translated into high-level events (e.g. swiping left/right, which a mobile AT will interpret itself and move the focus accordingly), and gestures that are directly handled via JavaScript (with touch- and pointer-event-specific code) - also keeping in mind that the latter can't be done by default when using a touchscreen AT unless the user explicitly triggers some form of gesture passthrough.
Detlev: That's a good point. Thinking of the perspective of an AT user carrying out an accessibility test, or even any non-programmer carrying out a heuristic accessibility evaluation using browser toolbars and things like Firebug, I wonder what is implied in making that distinction, and how it might be reflected in documented test procedures.
Are we getting to the point where it becomes impossible to carry out accessibility tests without investigating in detail the chain of events fired?
Patrick: For the former, the fact that the focus is moved sequentially using a swipe left/right rather than TAB/SHIFT+TAB does not cause any new issues not covered, IMHO, by the existing keyboard-specific SCs if, instead of keyboard, they talked in more input-agnostic terms. Same for not trapping focus, etc.
Detlev: One important difference being that swiping on mobile also gets to non-focusable elements. While a script may keep keyboard focus safely inside a pop-up window, a SR user may swipe beyond that pop-up unawares (unless the page background has been given the aria-hidden treatment, and that may not work everywhere as intended). Also, it may be easier to reset focus on a touch interface (e.g. 4-finger tap on iOS) compared to getting out of a keyboard trap if a keyboard is all you can use to interact.
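A minimal sketch of the aria-hidden treatment mentioned above, assuming a hypothetical page region (#page) and dialog (#dialog); as noted, AT support for this may not be consistent everywhere:

```js
// Sketch: when a dialog opens, hide the page background from AT so sequential
// swipe navigation cannot wander behind the dialog. Assumes <main id="page">
// and <div id="dialog" tabindex="-1"> exist in the page.
function openDialog() {
  document.querySelector('#page').setAttribute('aria-hidden', 'true');
  const dialog = document.querySelector('#dialog');
  dialog.hidden = false;
  dialog.focus(); // relies on the dialog element having tabindex="-1"
}

function closeDialog() {
  document.querySelector('#dialog').hidden = true;
  document.querySelector('#page').removeAttribute('aria-hidden');
}
```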
Patrick: For the latter, though, I agree that this would be touch (not mobile, though) specific...and advice should be given that custom gestures may be difficult/impossible to even trigger for certain users (even for single-touch gestures, and even more so for multi-touch ones).
Detlev: Assuming a non-expert perspective (say, product manager, company strategist), when looking at Principle 2 Operable it would be quite intelligible to talk about
2.1 Keyboard Accessible
2.5 Touch Accessible
2.6 Pointer Accessible (it's not just Windows and Android with a keyboard; BlackBerry has a pointer too)
2.7 Voice Accessible
While the input modes touch and pointer share many aspects and (as you show) touch events are actually mapped onto mouse events, there might be enough differences to warrant different Guidelines.
For example, you are right that there is no reason why target size and clearance should not also be defined for pointer input, but the actual values would probably be slightly lower in a "Pointer accessible" Guideline. A pointer is a) more pointed (sigh) and therefore more precise and b) does not obliterate its target in the same way as a finger tip.
Another example: an SC for touch might address multi-touch gestures; mouse has no swipe gesture. SCs under Touch Accessible may also cover two input modes: default (direct interaction) and the two-phase indirect interaction of focusing, then activating, when the screen reader is turned on.
Of course it might be more elegant to just make Guideline 2.1 input-mode agnostic, but I wonder whether the resulting abstraction would be intelligible to designers and testers. I think it would be worthwhile to take a stab at *just drafting* an input-agnostic Guideline 2.1 "Operable in any mode" and draft SC below it, to get a feel for what tweaking core WCAG might look like, and how Success Criteria and techniques down the line may play out. Interfaces catering for both mouse and touch input often lead to horrible, abject usability. Watch low-vision touch users swear at Windows 8 (Metro) built-in magnification via indirect input on sidebars (an abomination probably introduced because mice don't know how to pinch-zoom). Watch Narrator users struggle when swipe gestures get too close to the edge and unintentionally reveal the charms bar or those bottom and top slide-in bars in apps. Similar things happen when BlackBerry screen reader users unintentionally trigger the common swipes from the edges, which BB thought should be retained even with the screen reader on. And finally, watch mouse users despair as they cannot locate a close button in a Metro view because it is only revealed when they move the mouse right to the top edge of the screen.
Mike Pluke: I'd personally prefer something like "character input interface" [instead of keyboard interface] to further break the automatic assumption that we are talking about keyboards or other things with keys on them.
Gregg Vanderheiden: This note is great
- Note 1: A keyboard interface allows users to provide keystroke input to programs even if the native technology does not contain a keyboard.
I would add a note 2
- Note 2: Full control from a keyboard interface allows control from any input modality, since it is modality agnostic. It can allow control from speech, Morse code, sip and puff, eye gaze, gestures, an augmentative communication aid, or any other device or software program that can take any type of user input and convert it into keystrokes. Full control from a keyboard interface is to input what text is to output. Text can be presented in any sensory modality. Keyboard interface input can be produced by software using any input modality.
RE “character input interface”
- we thought of that but you need more than the characters on the keyboard. You also need arrow keys and return and escape etc.
- we thought of encoded input (but that is Greek) and ASCII (but that is not international) or Unicode (but that is undefined and really geeky)
Jonathan: While I agree the term [keyboard interface] is misleading, in desktop terms testing with a physical keyboard is one good way to make sure the keyboard interface is working. Even on mobile devices, supporting a physical keyboard through the keyboard interface is something that helps people with disabilities and is an important test. It just doesn't go far enough.
David MacDonald: +1 to "it doesn't go far enough." However,
David: How about this for a Guideline under which all the other touch events can be placed?
David: This provides a nice wide guideline under which we can place our Success Criteria and Techniques and it echos the language of the existing Guidelines. (i.e., Guideline 1.4)
Patrick: Sure, but this SC would be relegated to the "touch/mobile" extension to WCAG, which somebody designing a desktop/mouse site may never look into (again, going back to the fundamental problem of WCAG extensions, but I digress).
David: WCAG 2 is a stable document, entrenched in many jurisdictional laws, which is a good thing. So far, unless something drastically changes in consensus or in the charter approval, the extension model is what we are looking at. However, we may want to explore the idea of incorporating all these recommendations into failure techniques or sufficient techniques for *existing* Success Criteria in WCAG core, which would ensure they get first class treatment in WCAG proper. This would ensure that they are not left out of jurisdictions that didn't add the extension. But some of the placement in existing Success Criteria could be pretty contrived. Most would probably end up in 1.3.1 (like everything else).
David: Is this applicable to all mobile sites? See comment above.
Jonathan: But we still need an exception like we have for keyboard access for things like drawing and signatures, etc. So we need to take into account timing and paths, etc. - "except when the touch interaction requires specific timing or path"... Perhaps pulling out similar language to what is in the keyboard success criterion about timing and paths.
2.5.1 Touch: All functionality of the content is operable through a touch interface without requiring specific timings for individual touch gestures, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)
David: I think we are overreaching by requiring EVERYTHING to work with touch. I think we want to stick with requiring that anything that DOES operate via touch can be used by a variety of users, including those with touch-based screen readers. I think we can drop this and be more granular, as with the other Success Criteria below.
Gregg: Not possible or practical: an Apple Watch - do all the physical controls on the side have to also be operable from the screen? Or do you mean that a web page designer needs to provide their own keyboard in their content for any keyboard input on their page? I'm not sure this makes sense. If you are relying on the keyboard interface for input of text - then it is not all via touch - some is via the keyboard interface. And some mobile devices don't have an onscreen keyboard (they have a physical one) - so "all by touch" means you again would have to provide all the input with a keyboard built into each web page, or you would fail this SC.
Gregg: What good is one dimension?? If you have any physical disability you need to specify both dimensions. Also - for what size screen? An Apple Watch? An iPhone 4? All buttons would have to be huge in order to comply on very small screens - and you don't know what size screen - so you can't use absolute measures unless you assume the smallest screen.
Patrick: - Gregg's question about 9mm - it would be good to clarify if we mean *physical* mm or CSS mm. Note that many guidelines (such as Google's guidelines for Android, or Microsoft's app design guidelines) use measurements such as dips (device-independent pixels) precisely to avoid having to deal with differences in actual physical device dimensions (as it's the device/OS's responsibility to map its actual physical size to a reasonable dips measure, so authors can take that as a given that is reasonably uniform across devices).
- On a more general level, I questioned why there should be an SC relating to target size for *touch*, but no equivalent SC for mouse or stylus interaction.
Jon: My guess is that touch target size would need to be larger than a mouse pointer touch area -- so the touch target would catch those as well.
Patrick: Too small a target size can be just as problematic for users with tremors, mobility impairments, reduced dexterity, etc.
Jon That's exactly who this SC is aimed at. This SC is not specifically aimed at screen reader users or low vision users but people with motor impairments.
Patrick: I know it's not the remit of the TF, but I'd argue that this is exactly the sort of thing that would benefit from being a generalised SC applicable to all manner of pointing interaction (mouse, pen, touch, etc). Or is the expectation that there will be a separate TF for "pen and stylus TF", "mouse interaction TF", etc? (these two points also apply to 2.5.5)
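For illustration only, a rough audit sketch in CSS-pixel terms; the 44px threshold is an assumed value in the range used by common platform guidelines, not the 9mm figure under discussion, and how 9mm maps to CSS pixels depends on whether physical or CSS millimetres are meant:

```js
// Rough illustration: flag rendered targets smaller than an assumed minimum,
// measured in CSS pixels via getBoundingClientRect().
const MIN_TARGET_PX = 44; // assumption, roughly the 44-48px range in platform guidance

document.querySelectorAll('a, button, input, [role="button"]').forEach(el => {
  const rect = el.getBoundingClientRect();
  if (rect.width > 0 && (rect.width < MIN_TARGET_PX || rect.height < MIN_TARGET_PX)) {
    console.warn('Small target:', el,
      `${Math.round(rect.width)}x${Math.round(rect.height)}px`);
  }
});
```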
Patrick: I like the concept, but the wording that follows (requiring that things only trigger if the touch point is still within the same element) is overly specific/limiting in my view. Also, it is partly out of the developer's control. For instance, in current iOS and Android, touch events have a magic "auto-capture" behavior: you can start a touch sequence on an element, move your touch point outside of the element, and release it...it will still fire touchmove/touchend events (but not click, granted). Pointer Events include an explicit feature to capture pointers and to emulate the same behavior as touch events. However, it would be possible to make taps/long presses revocable by, for instance, prompting the user with a confirmation dialog as a result of a tap/press (if the action is significant/destructive in particular). This would still fulfill the "revocable" requirement, just in a different way to "must be lifted inside the element". In short: I'd keep the principle of "revocable" actions, but would not pin down that "the finger (touch point, whatever...keeping it a bit more agnostic) is lifted inside the element".
Gregg: This will make interfaces unusable by some people who cannot reliably land and release within the same element. Also it is only a relatively small number that know about this. Also if someone hits something by mistake – they usually don’t have the motor control to use this approach. Better is the ability to reverse or undo. I think that is already in WCAG though – with caveats.
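A minimal sketch of the confirmation approach Patrick suggests (which also offers the kind of reversal Gregg prefers), assuming a hypothetical delete button and deleteItem() action:

```js
// Sketch: making a destructive tap revocable via confirmation, rather than
// requiring the touch point to be lifted inside the element.
function deleteItem() {
  // Hypothetical destructive action.
}

document.querySelector('#delete').addEventListener('click', () => {
  // window.confirm is used for brevity; a real page would more likely use an
  // accessible custom dialog.
  if (window.confirm('Delete this item? This cannot be undone.')) {
    deleteItem();
  }
});
```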
Gregg: You have no control of how it is changed – so how can you be expected to have anything still work?
David MacDonald: How about this:
2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated. (Level A)
David: In the understanding document for this SC we would explain that touch gestures with VO on could be and probably would be the VO equivalent to the standard gestures used with VO off.
Patrick: It's not possible to recognise gestures when VoiceOver is enabled, as VO intercepts gestures for its own purposes (similar to how desktop AT intercept key presses) unless the user explicitly uses a pass-through gesture. Does this imply that interfaces need to be made to also work just with an activation/double-tap? i.e., does double-tap count in this context as a "gesture"? If not, it's not technically possible for web pages to force pass-through (there is no equivalent to role="application" for desktop/keyboard handling).
David: VO uses gestures for its own purposes and then adds gestures to substitute for those it replaced, i.e., VO 3-finger swipe = 1-finger swipe. I'm suggesting that we require that everything that can be accomplished with gestures with VO off can also be accomplished with VO on.
Patrick: Not completely, though. If I build my own gesture recognition from basic principles (tracking the various touchstart/touchmove/touchend events), the only way that gesture can be passed on to the JS when VO is activated is if the user performs a pass-through gesture, followed by the actual gesture I'm detecting via JS. Technically, this means that yes, even VO users can make any arbitrary gesture detected via JS, but in practice, it's - in my mind - more akin to mouse-keys (in that yes, a keyboard user can nominally use any mouse-specific interface by using mouse keys on their keyboard, just as a touch-AT user can perform any custom gesture...but it's more of a last resort, rather than standard operation). Also, not sure if Android/TalkBack, Windows Mobile/Narrator have these sorts of pass-through gestures (even for iOS/VO, it's badly documented...no mention of it that I could find on any official Apple sites). In short, to me this still makes it lean more towards providing all functionality in other, more traditional ways (which would then also work for mobile/tablet users with an external keyboard/keyboard-like interface). Gestures can be like shortcuts for touch users, but should not replace more traditional buttons/widgets, IMHO. This may be a user setting perhaps? Choose if the interface should just rely on touch gestures, or provide additional focusable/actionable controls?
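A minimal sketch of a gesture handled "from basic principles" as described above, assuming a hypothetical carousel, showSlide() function, and a traditional Next button as the fallback; when a touchscreen AT is running, the swipe handler will normally not receive the gesture unless the user performs a pass-through:

```js
// Hypothetical: advance the carousel in the given direction.
function showSlide(direction) {}

const carousel = document.querySelector('#carousel');
let startX = null;

// Custom horizontal swipe detected from touchstart/touchend. A touchscreen AT
// intercepts this touch sequence for its own navigation, so the handler is
// normally unreachable without a pass-through gesture.
carousel.addEventListener('touchstart', e => {
  startX = e.changedTouches[0].clientX;
});

carousel.addEventListener('touchend', e => {
  if (startX === null) return;
  const deltaX = e.changedTouches[0].clientX - startX;
  if (Math.abs(deltaX) > 50) showSlide(deltaX < 0 ? 'next' : 'previous');
  startX = null;
});

// Traditional fallback control, operable with AT, keyboard, mouse, etc.
document.querySelector('#next').addEventListener('click', () => showSlide('next'));
```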
Jonathan: I also worry that people might try to say that pass through gestures would meet this requirement.
David: How could we fix this concern? I think WCAG 2.1.1 already covers the need for keyboard use (without MouseKeys). We could maybe plug the hole so the pass-through gesture is not relied on by the author, the same way 2.1.1 does not allow relying on MouseKeys.
Patrick: Does this imply that interfaces need to be made to also work just with an activation/double-tap? i.e., does double-tap count in this context as a "gesture"?
Jonathan: In theory I think this would benefit people using prosthetics too. For example, many apps support zoom by double tapping without requiring a pinch. You should be able to control all actions from touch (e.g. through an API) and also through the keyboard. But I think it would be too constrictive to require tap, double tap, long tap, etc. Since screen readers and the API support actions through rotors and other gestures, it would seem that API-based and keyboard access would be sufficient. But you bring up a good point that while this might make sense on native -- mobile web apps don't have a good way, without Indie UI, to expose actions to the native assistive technologies. This is a key area that needs to be addressed by other groups and perhaps may be addressed by other options such as WAPA -- but we do need to be careful and perform some research, as the abilities we need may not yet be supported or part of a mature enough specification.
David: It would be great to operate everything through taps... even creating a Morse code type of thing, where all gestures could be done with taps for those who can't swipe, but it would require a lot more functionality than is currently available. I think we should park it, and perhaps provide it as a best practice technique under this Success Criterion.
Gregg: Do they have a way to map screen reader gestures [to avoid] colliding with special gestures in apps? This was not to replace the use of gestures — but to provide a simple alternate way to get at them if you can't make them (physically can't, or can't because of collisions).
Patrick: Not to my knowledge. iOS does have some form of gesture recording with Assistive Touch, but I can't seem to get it to play ball in combination with VoiceOver, and in the specific case of web content (though this may be my inexperience with this feature). On Android/Win Mobile side, I don't think there's anything comparable, so certainly no cross-platform, cross-AT mechanism.
Jonathan: This is only one aspect of the situation. It's not so much colliding gestures as a collision of how the touch interface is reconfigured to trap gestures, combined with the issue of not being able to see where the gesture is being drawn. For iOS native apps, there is:
- an actions API that allows apps to associate custom actions with an actions rotor or assign a default action to a magic tap gesture
- a pass through gesture –tap and hold and then perform the gestures.
- A trait that can be assigned that will allow direct UI interaction with the element – allowing screen reader users the ability to sign their name, etc.
Take for example a hypothetical knob on a webpage. Without a screen reader I can turn that knob to specific settings. As a developer I can implement keystrokes, let's say Control+1, Control+2, etc., for the different settings. I have met the letter of the success criterion by providing a keyboard interface through creating JavaScript shortcut keystroke listeners. In practical reality, though, as a mobile screen reader user who does not carry around a keyboard, I have no way to trigger those keystrokes.
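A minimal sketch of the keystroke-listener approach Jonathan describes, with a hypothetical knob element and setKnob() setter; it satisfies the letter of the keyboard requirement but is out of reach for a mobile screen reader user without a paired keyboard:

```js
const knob = document.querySelector('#knob'); // hypothetical widget

// Hypothetical: move the knob to the given setting.
function setKnob(position) {}

// Shortcut keystroke listeners (Control+1 ... Control+9).
knob.addEventListener('keydown', e => {
  if (e.ctrlKey && e.key >= '1' && e.key <= '9') {
    e.preventDefault();
    setKnob(Number(e.key));
  }
});
```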
Patrick: Actually, it gets worse than that. As I noted previously, not all mobile/tablet devices with a paired keyboard actually send keyboard (keydown, keypress) events all the time. In iOS, with a paired keyboard (but no VO enabled), the keyboard is completely inactive except when the user is in a text entry field or similar (basically, it only works in the same situations in which iOS' on-screen keyboard would be triggered). When VO is enabled, the keyboard still only sends keyboard events when in a text entry field etc. In all other situations, every keystroke is intercepted by VO (and again, there is no mechanism to override this with role="application" or similar). In short, for iOS you can't rely on anything that listens for keydown/keypress either. In Android, the situation is more similar to what would happen on desktop (from what I recall at least...would need to do some further testing) in that the keyboard always works/fires key events. Not had a chance to test Windows Mobile with a paired keyboard yet, but I suspect it works in a similar way.
David: When we were tying up WCAG in the years 2000-2008, we never envisioned people who are blind using a flat screen to operate a mobile device. I think it was a huge leap forward for our industry, and we need to foster their relationship to their devices and run with it. Keyboard requirements are in place; they are not going away. Our job now is to look at the gaps, and see if there is anything we can do to ensure these users can continue to use their flat screens, which have levelled the playing field for the blind, and to foster authoring that doesn't screw that up.
Here's a rewrite addressing the concerns:
2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass-through gestures on the system (Level A)
Patrick: As said, when touch AT is running, all gestures are intercepted by the AT at the moment (unless you mean taps?). And only iOS, to my knowledge, has a pass-through gesture (which is not announced/exposed to users, so a user would have to guess that if they tried it, something would then happen).
If the intention was to also mean "taps", this is lost on me and possibly the majority of devs, as "gesture" usually implies a swipe, pinch, rotation, etc., which are all intercepted. [ED: skimming towards the end of the document, I see that in 3.3 Touchscreen Gestures "taps" are listed. This, to me - and I'd argue most other devs - is confusing...I don't normally think of a "tap" as a "gesture".] So this SC (at least the "touch gestures with ... assistive technology activated" part) is currently technically *impossible* to satisfy (for anything other than taps), except by not using gestures or by providing alternatives to gestures like actionable buttons.
This can be clarified in the prose for the SC, but perhaps a better way would be to drop the "gestures" word, and then the follow-up about passthrough, leaving a much simpler/clearer:
"2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch with and without system assistive technology activated (Level A)"
I'm even wondering about the "For pages and applications that support touch" preamble...why have it here? Every other SC relating to touch should then also have it, for consistency? Or perhaps just drop that bit too?
"2.5.4 Touch: All functionality of the content is operable through touch with and without system assistive technology activated (Level A)"
OR is the original intent of this SC to be in fact
"2.5.4 Touch: For pages and applications that support touch *GESTURES*, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass through gestures on the system (Level A)"
Is this about gestures? In that case, it's definitely technically impossible to satisfy this SC at all currently (see above), so I'd be strongly opposed to it.
Detlev: Maybe it's better to separate the discussion of terminology from the discussion of reworking the mobile TF Doc.
I personally don't get why someone would choose to call swiping or pinching a gesture, but refuse to apply this term to tapping. What about double and triple taps? Taps with two fingers? Long presses? Split taps? To me, it makes sense to call *all* finger actions applied to a touch screen a gesture. I simply don't get why tapping would not count. Where do you draw the line, and why? A related issue is the distinction between touch gestures and button presses. With virtual (non-tactile, but fixed-position capacitive) buttons, you already get into a grey area. The drafted Guideline 2.5 "Touch Accessible: All functionality available via touch" probably needs to be expanded to account for devices with physical (both tactile and capacitive) device buttons. Which would mean something like:
Guideline 2.5 OR SC 2.5.4: On devices that support touch input, all functions are available via touch or button presses also after AT is turned on (i.e. without the use of external keyboards).
Detlev: Not well put, but you get the idea.
David: I think when we say Touch, we mean all touch activities such as swipes, taps, gestures etc... anything you do to operate the page by touching it. Regarding gestures: all gestures are intercepted by VoiceOver, but all standard gestures are replaced by VoiceOver, unless the author does something dumb to break that. I think we need to, at a minimum, ensure that standard replacement gestures are not messed up. For instance: I recently tested a high-profile app for a major sports event. It had a continuous-load feature, like Twitter, that kept populating as you scroll down with one finger. Turn on VoiceOver and try the 3-finger equivalent of a one-finger swipe to do a standard scroll, and nothing happens to populate the page. The blind user has hit a brick wall. I think we have to ensure this type of thing doesn't happen in WCAG-conforming content.
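A minimal sketch of one way to avoid that brick wall, assuming a hypothetical #load-more button and loadMoreItems() function: the scroll-triggered loading remains an enhancement while an ordinary button stays available to AT users:

```js
// Hypothetical: fetch and append the next batch of results.
function loadMoreItems() {}

// Ordinary button: reachable and operable with VoiceOver, keyboard, switch, etc.
document.querySelector('#load-more').addEventListener('click', loadMoreItems);

// Scroll-triggered loading as an enhancement, not the only path.
window.addEventListener('scroll', () => {
  if (window.innerHeight + window.scrollY >= document.body.scrollHeight - 200) {
    loadMoreItems();
  }
});
```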
David: Isn't this the same as 2.5.2 above (9 mm distance)?
Gregg: This is essentially a 9x9 target, center to center. The same problems as above: 9mm on what mobile device?
Gregg: Advised in an accessible way to all users?
David: Have to fix "should be avoided" or send to advisory
Gregg Comment: Maybe better as a failure of 1.4.4. FAILURE: Blocking the zoom feature (pinch zoom or other) without providing some other method for achieving 200% magnification or better.
Patrick: Just wondering if the fact that most mobile browsers (Chrome, Firefox, IE, Edge) provide settings to override/force zooming even when a page has disabled it makes any difference here? iOS/Safari is the only mainstream mobile browser which currently does not provide such a setting, granted. But what if that too did?
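For illustration, a minimal sketch of one possible "other method for achieving 200% magnification", assuming a hypothetical #zoom-toggle button and rem-based styling; this is not a suggestion that restricting pinch zoom is advisable:

```js
const zoomToggle = document.querySelector('#zoom-toggle');
let zoomed = false;

zoomToggle.addEventListener('click', () => {
  zoomed = !zoomed;
  // Doubling the root font size scales layouts that use rem-based sizing.
  document.documentElement.style.fontSize = zoomed ? '200%' : '';
  zoomToggle.setAttribute('aria-pressed', String(zoomed));
});
```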
Gregg: How is this different from "all must be keyboard operable"? This says if it is a gesture, then it must work by gesture and keyboard. So that looks the same as "it must be keyboard operable".
David: It adds "Touch".
Gregg: "Ensure" is a requirement. Is this always possible?
Gregg: This is not a web content issue but a mobile device issue. Hmmm, how about an alert? Again – if it can't always be possible – it shouldn't be an SC. Maybe it is always possible? ???? Home screens?
Patrick: Agree with Gregg this is not a web content issue as currently stated. Also, not every orientation change needs something like an alert to the user...what if nothing actually changes on the page when switching between portrait and landscape - does an AT user need to know that they just rotated the device? Perhaps the intent here is to ensure web content notifies the user if an orientation change had some effect, like a complete change in layout (for instance, a tab navigation in landscape turning into an accordion in portrait; a navigation bar in landscape turning into a button+dropdown in portrait)? If so, this needs rewording, along similar lines to a change in context?
Jon: Yes, that is the intention. For example, if you change from landscape to portrait a set of links disappears and now there is a button menu instead. Or controls disappear or appear depending on the orientation.
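A minimal sketch of that intention, assuming a hypothetical live region (#status) in the page and a layout that genuinely changes between orientations:

```js
// Announce an orientation-driven layout change via a live region.
const status = document.querySelector('#status'); // e.g. <div id="status" role="status">
const portraitQuery = window.matchMedia('(orientation: portrait)');

portraitQuery.addEventListener('change', e => {
  // Only worth announcing if the layout genuinely changes between orientations.
  status.textContent = e.matches
    ? 'Portrait layout: navigation is now behind a menu button.'
    : 'Landscape layout: navigation links are shown directly.';
});
```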
Gregg: Good to focus this as technique for WCAG.
Gregg: Good – advisory techniques.
Gregg: Good technique for WCAG. Oh, this is the same as H2, no? Are you just suggesting adding this text to H2? Good idea.
Gregg: Just as true for non-mobile, BUT - not testable. What does "sufficiently distinct" mean? Or "clearly distinguishable"? WCAG requires that they be programmatically determined - so users could use AT to make them very visible (much more so than designers would ever permit). But I'm not sure how you can create something testable out of this. Make it an ADVISORY TECHNIQUE???
Gregg: Not sure how these are sufficient by themselves to meet the above. This has to do with making things findable or understandable – not distinguishable.
Gregg: Good advisory technique.
Gregg: How would content know what this was? For example - if a page provided self-voicing, this might interfere with the screen reader on the platform. So no page can ever self-voice?
Gregg Comment: This looks like a technique for 1.4.4 - but you should say "to at least 200%" or else it could not be sufficient.
Gregg: What is the evidence that this is of value? Not true of many keyboards. Are they all unusable? Also, if you define a gap - see notes above on "what size screen for that gap?"
David: Rationale for not being a Success Criterion: cannot measure this or apply it in all circumstances. Can we rework it?
Gregg: It SHOULD be required. But it is already covered by "all functions from keyboard interface", since that would provide an alternate method. So there is already an alternate way to do this. NOTE: again - for some devices it may not be possible to have something be accessible. A brooch that you tap on - and ask questions and it answers in audio - would not be usable by someone who is deaf. The fact that you can't make it usable would not be a reason to rewrite the accessibility rules to make it possible for it to pass. It simply would always be inaccessible. Accessibility rules do not say that everything must be accessible to all. They say that if it is reasonable, or not an undue burden, or some such - then it needs to do x or y or z. Some things are not required to be accessible to some groups. That does not make them accessible - it only means they are not required to be accessible. RE keyboard interface - there may be some IoT devices that do not have remote interfaces - and the IoT device itself is too small or limited to be accessible. We don't rewrite the rules to make it possible for it to pass. We simply say that it is not accessible and it is not possible or reasonable to make it so. Most IoT does have a remote interface - so that can be accessible.
Gregg Comment: Quantifying it would be required, but since it doesn't apply to many pages - which have interactive content all over the page - quantification is not relevant.
Gregg: Agree so advisory technique for WCAG?
Gregg: Can't be an SC because it is prescriptive and lists specific solutions - when others may also apply and be better.
This section is non-normative.
This document provides informative guidance (but does not set requirements) with regard to interpreting and applying Web Content Accessibility Guidelines (WCAG) 2.0 [WCAG20] to web and non-web mobile content and applications.
While the World Wide Web Consortium (W3C)'s W3C Web Accessibility Initiative (WAI) is primarily concerned with web technologies, guidance for web-based technologies is also often relevant to non-web technologies. The W3C-WAI has published the Note Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT) to provide authoritative guidance on how to apply WCAG to non-web technologies such as mobile native applications. The current document is a mobile-specific extension of this effort.
W3C Mobile Web Initiative Recommendations and Notes pertaining to mobile technologies also include the Mobile Web Best Practices and the Mobile Web Application Best Practices. These offer general guidance to developers on how to create content and applications that work well on mobile devices. The current document is focused on the accessibility of mobile web and applications to people with disabilities and is not intended to supplant any other W3C work.
"Mobile" is a generic term for a broad range of wireless devices and applications that are easy to carry and use in a wide variety of settings, including outdoors. Mobile devices range from small handheld devices (e.g. feature phones, smartphones) to somewhat larger tablet devices. The term also applies to "wearables" such as "smart"-glasses, "smart"-watches and fitness bands, and is relevant to other small computing devices such as those embedded into car dashboards, airplane seatbacks, and household appliances.
While mobile is viewed by some as separate from "desktop/laptop", and thus perhaps requiring new and different accessibility guidance, in reality there is no absolute divide between the categories. For example:
Furthermore, the vast majority of user interface patterns from desktop/laptop systems (e.g. text, hyperlinks, tables, buttons, pop-up menus, etc.) are equally applicable to mobile. Therefore, it's not surprising that a large number of existing WCAG 2.0 techniques can be applied to mobile content and applications (see Appendix A). Overall, WCAG 2.0 is highly relevant to both web and non-web mobile content and applications.
That said, mobile devices do present a mix of accessibility issues that are different from the typical desktop/laptop. The "Discussion of Mobile-Related Issues" section, below, explains how these issues can be addressed in the context of WCAG 2.0 as it exists or with additional best practices. All the advice in this document can be applied to mobile web sites, mobile web applications, and hybrid web-native applications. Most of the advice also applies to native applications (also known as "mobile apps").
Note: WCAG 2.0 does not provide testable success criteria for some of the mobile-related issues. The work of the Mobile Accessibility Task Force has been to develop techniques and best practices in these areas. When the techniques or best practices don't map to specific WCAG success criteria, they aren't given a sufficient, advisory or failure designation. This doesn't mean that they are optional for creating accessible web content on a mobile platform, but rather that they cannot currently be assigned a designation. The Task Force anticipates that some of these techniques will be included as sufficient or advisory in a potential future iteration of WCAG.
The current document references existing WCAG 2.0 Techniques that apply to mobile platforms (see Appendix A) and provides new best practices, which may in the future become WCAG 2.0 Techniques that directly address emerging mobile accessibility challenges such as small screens, touch and gesture interfaces, and changing screen orientation.