David Macdonald Summary Comment: I think this discussion demonstrates that requiring ALL functionality to work with touch will be difficult. I don't think that we should require that "all functionality" be available via touch because:
  1. It may not be possible.
  2. It may not apply in ALL cases to ALL mobile sites.
  3. I think we have to assume that normal usability practices will ensure that mobile apps will be primarily touch functioning.
However, the accessibility gap is that developers don't ensure that someone running assistive technology can ALSO operate the system with touch. This is 2.5.4 below.

David: How about this for a Guideline under which all the other touch events can be placed?

Guideline 2.5 Touch Accessible: Make it easier for users to operate touch functionality (Understanding)

David: This provides a nice wide guideline under which we can place our Success Criteria and Techniques, and it echoes the language of the existing Guidelines (e.g., Guideline 1.4).

Patrick: Sure, but this SC would be relegated into the "touch/mobile" extension to WCAG, which somebody designing a desktop/mouse site may look into (again, going back to the fundamental problem of WCAG extension, but I digress).

David: WCAG 2 is a stable document, entrenched in many jurisdictional laws, which is a good thing. So far, unless something drastically changes in consensus or in the charter approval, the extension model is what we are looking at. However, we may want to explore the idea of incorporating all these recommendations into failure techniques or sufficient techniques for *existing* Success Criteria in WCAG core, which would ensure they get first class treatment in WCAG proper. This would ensure that they are not left out of jurisdictions that didn't add the extension. But some of the placement in existing Success Criteria could be pretty contrived. Most would probably end up in 1.3.1 (like everything else).

David Summary: I think it is worth carefully weighing the pros and cons of rolling these into WCAG core vs. adding Success Criteria and Guidelines in this extension.

New Proposed Success Criteria under this proposed Guideline

2.5.1 Touch: All functionality of the content is operable through touch gestures. (Level A)

David: Is this applicable to all mobile sites? See comment above.

Jonathan: But we still need an exception like we have for keyboard access for things like drawing and signatures, etc. So we need to take timing and paths into account: "except when the touch interaction requires specific timing or path..." Perhaps pull out similar language from the keyboard success criterion about timing and paths.

2.5.1 Touch: All functionality of the content is operable through a touch interface without requiring specific timings for individual touch gestures, except where the underlying function requires input that depends on the path of the user's movement and not just the endpoints. (Level A)

David: I think we are overreaching by requiring EVERYTHING to work with touch. I think we want to stick with requiring that anything that DOES operate via touch can be used by a variety of users, including those with touch-based screen readers. I think we can drop this and be more granular, as with the other Success Criteria below.

Gregg: Not possible or practical. On an Apple Watch, do all the physical controls on the side also have to be operable from the screen? Or do you mean that a web page designer needs to provide their own keyboard in their content for any keyboard input on their page? I'm not sure this makes sense. If you are relying on the keyboard interface for input of text, then it is not all via touch; some is via the keyboard interface. And some mobile devices don't have an onscreen keyboard (they have a physical one), so "all by touch" means you AGAIN would have to provide all the input with a keyboard built into each web page, or you would fail this SC.

2.5.2 Touch Target Size: One dimension of any touch target measures at least 9 mm except when the user has reduced the default scale of content. (Level AA)

Gregg: What good is one dimension?? If you have any physical disability you need to specify both dimensions. ALSO, for what size screen? An Apple Watch? An iPhone 4? All buttons would have to be huge in order to comply on very small screens, and you don't know what size screen, so you can't use absolute measures unless you assume the smallest screen.

Patrick: On Gregg's question about 9mm: it would be good to clarify whether we mean *physical* mm or CSS mm. Note that many guidelines (such as Google's guidelines for Android, or Microsoft's app design guidelines) use measurements such as dips (device-independent pixels) precisely to avoid having to deal with differences in actual physical device dimensions (it's the device/OS's responsibility to map its actual physical size to a reasonable dips measure, so authors can take that as a given that is reasonably uniform across devices). On a more general level, I questioned why there should be an SC relating to target size for *touch* when there's no equivalent SC for mouse or stylus interaction.

Jon: My guess is that a touch target would need to be larger than a mouse pointer's target area -- so the touch target would catch those as well.

Patrick: Too small a target size can be just as problematic for users with tremors, mobility impairments, reduced dexterity, etc.

Jon: That's exactly who this SC is aimed at. This SC is not specifically aimed at screen reader users or low vision users, but at people with motor impairments.

Patrick: I know it's not the remit of the TF, but I'd argue that this is exactly the sort of thing that would benefit from being a generalised SC applicable to all manner of pointing interaction (mouse, pen, touch, etc). Or is the expectation that there will be a separate TF for "pen and stylus TF", "mouse interaction TF", etc? (these two points also apply to 2.5.5)
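
For concreteness, a minimal CSS sketch of a dips-based minimum target size along the lines Patrick mentions, assuming 9 mm is approximated by 48 CSS pixels (48dp, the minimum in Google's Android guidance, is roughly 9 mm at the 160 dpi baseline):

    <style>
      /* Assumption: 48 CSS pixels approximates a 9 mm physical target
         on a typical device; actual physical size varies per device. */
      .touch-target {
        min-width: 48px;
        min-height: 48px;
      }
    </style>
    <button class="touch-target">Save</button>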

2.5.3 Single Taps and Long Presses Revocable: Interface elements that require a single tap or a long press as input will only trigger the corresponding event when the finger is lifted inside that element. (Level A)

Patrick: I like the concept, but the wording that follows (requiring that things only trigger if the touch point is still within the same element) is overly specific/limiting in my view. Also, it is partly out of the developer's control. For instance, in current iOS and Android, touch events have a magic "auto-capture" behavior: you can start a touch sequence on an element, move your touch point outside of the element, and release it...it will still fire touchmove/touchend events (but not click, granted). Pointer Events include an explicit feature to capture pointers and to emulate the same behavior as touch events. However, it would be possible to make taps/long presses revocable by, for instance, prompting the user with a confirmation dialog as a result of a tap/press (particularly if the action is significant/destructive). This would still fulfill the "revocable" requirement, just in a different way to "must be lifted inside the element". In short: I'd keep the principle of "revocable" actions, but would not pin down that "the finger (touch point, whatever...keeping it a bit more agnostic) is lifted inside the element".

Gregg: This will make interfaces unusable by some people who cannot reliably land and release within the same element. Also, only a relatively small number of people know about this. Also, if someone hits something by mistake, they usually don't have the motor control to use this approach. Better is the ability to reverse or undo. I think that is already in WCAG, though, with caveats.
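
For concreteness, a sketch of the literal "lifted inside the element" reading of 2.5.3 (element and handler names here are illustrative, not from the document); Patrick's confirmation-dialog suggestion would be an alternative way to make an action revocable:

    <button id="delete-item">Delete</button>
    <script>
      // Only act if the touch point is still inside the element when
      // the finger is lifted; releasing outside cancels the action.
      var btn = document.getElementById('delete-item');
      btn.addEventListener('touchend', function (e) {
        var touch = e.changedTouches[0];
        var rect = btn.getBoundingClientRect();
        var inside = touch.clientX >= rect.left && touch.clientX <= rect.right &&
                     touch.clientY >= rect.top && touch.clientY <= rect.bottom;
        if (inside) {
          // ...perform the (revocable) action here
        }
      });
    </script>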

2.5.4 Modified Touch: When touch input behavior is modified by built-in assistive technology, all functionality of the content is still operable through touch gestures. (Level A)

Gregg: You have no control of how it is changed – so how can you be expected to have anything still work?

David MacDonald: How about this:

2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated. (Level A)

David: In the understanding document for this SC we would explain that touch gestures with VO on could be and probably would be the VO equivalent to the standard gestures used with VO off.

Patrick: As it's not possible to recognise gestures when VoiceOver is enabled (VO intercepts gestures for its own purposes, similar to how desktop AT intercepts key presses, unless the user explicitly uses a pass-through gesture), does this imply that interfaces need to be made to also work just with an activation/double-tap? I.e., does double-tap count in this context as a "gesture"? If not, it's not technically possible for web pages to force pass-through (there is no equivalent to role="application" for desktop/keyboard handling).

David: VO uses gestures for its own purposes and then adds gestures to substitute for those it replaced (e.g., a VO 3-finger swipe = a standard 1-finger swipe). I'm suggesting that we require that everything that can be accomplished with gestures with VO off can also be accomplished with VO on.

Patrick: Not completely, though. If I build my own gesture recognition from basic principles (tracking the various touchstart/touchmove/touchend events), the only way that gesture can be passed on to the JS when VO is activated is if the user performs a pass-through gesture, followed by the actual gesture I'm detecting via JS. Technically, this means that yes, even VO users can make any arbitrary gesture detected via JS, but in practice, it's - in my mind - more akin to mouse-keys (in that yes, a keyboard user can nominally use any mouse-specific interface by using mouse keys on their keyboard, just as a touch-AT user can perform any custom gesture...but it's more of a last resort, rather than standard operation). Also, not sure if Android/TalkBack, Windows Mobile/Narrator have these sorts of pass-through gestures (even for iOS/VO, it's badly documented...no mention of it that I could find on any official Apple sites). In short, to me this still makes it lean more towards providing all functionality in other, more traditional ways (which would then also work for mobile/tablet users with an external keyboard/keyboard-like interface). Gestures can be like shortcuts for touch users, but should not replace more traditional buttons/widgets, IMHO. This may be a user setting perhaps? Choose if the interface should just rely on touch gestures, or provide additional focusable/actionable controls?
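
For concreteness, a sketch of the kind of hand-rolled gesture detection Patrick describes (the 50px threshold is arbitrary, for illustration only); with a touch AT such as VoiceOver running, these events never reach the page unless the user first performs a pass-through gesture.

    <script>
      // Detect a simple horizontal swipe from raw touch events.
      var startX = null;
      document.addEventListener('touchstart', function (e) {
        startX = e.touches[0].clientX;
      });
      document.addEventListener('touchend', function (e) {
        if (startX === null) return;
        var deltaX = e.changedTouches[0].clientX - startX;
        if (Math.abs(deltaX) > 50) { // arbitrary threshold
          // ...treat as a left/right swipe
        }
        startX = null;
      });
    </script>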

Jonathan: I also worry that people might try to say that pass through gestures would meet this requirement.

David: How could we fix this concern? I think WCAG 2.1.1 already covers the need for keyboard use (without MouseKeys). We could maybe plug the hole so the pass-through gesture is not relied on by the author, the same way 2.1.1 does not rely on MouseKeys.

Patrick: Does this imply that interfaces need to be made to also work just with an activation/double-tap? I.e., does double-tap count in this context as a "gesture"?

Jonathan: In theory I think this would benefit people with prosthetics too. For example, many apps support zoom by double tapping without requiring a pinch. You should be able to control all actions from touch (e.g. through an API) and also through the keyboard. But I think it would be too restrictive to require tap, double tap, long tap, etc. Since screen readers and the API support actions through rotors and other gestures, it would seem that API-based and keyboard access would be sufficient. But you bring up a good point: while this might make sense on native, mobile web apps don't have a good way without Indie UI to expose actions to the native assistive technologies. This is a key area that needs to be addressed by other groups and perhaps may be addressed by other options such as WAPA -- but we do need to be careful and perform some research, as the abilities we need may not yet be supported or part of a mature enough specification.

David: It would be great to operate everything through taps... even creating a Morse code type of thing, where all gestures could be done with taps for those who can't swipe, but it would require a lot more functionality than is currently available. I think we should park it, and perhaps provide it as a best practice technique under this Success Criterion.

Gregg: do they have a way to map screen readers gestures [to avoid] colliding special gestures in apps? this was not to replace use of gestures — but to provide a simple alternate way to get at them if you can’t make them (physically can’t or can’t because of collisions) 

Patrick: Not to my knowledge. iOS does have some form of gesture recording with Assistive Touch, but I can't seem to get it to play ball in combination with VoiceOver, and in the specific case of web content (though this may be my inexperience with this feature). On Android/Win Mobile side, I don't think there's anything comparable, so certainly no cross-platform, cross-AT mechanism.

Jonathan: This is only one aspect of the situation. It's not so much colliding gestures as a collision between how the touch interface is reconfigured to trap gestures and the issue of not being able to see where the gesture is being drawn. For iOS native apps, there is:

Take for example a hypothetical knob on a webpage. Without a screen reader I can turn that knob to specific settings. As a developer I can implement keystrokes, let's say control+1, control+2, etc. for the different settings. I have met the letter of the success criterion by providing a keyboard interface through creating JavaScript shortcut keystroke listeners. In practical reality, though, as a mobile screen reader user who does not carry around a keyboard, I have no way to trigger those keystrokes.
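
A sketch of the shortcut listener Jonathan describes (setKnob is a hypothetical helper): it meets the letter of keyboard access, yet a mobile screen reader user without a paired keyboard cannot trigger it.

    <script>
      // Keyboard-only access to the hypothetical knob's settings.
      document.addEventListener('keydown', function (e) {
        if (e.ctrlKey && e.key === '1') setKnob(1); // control+1
        if (e.ctrlKey && e.key === '2') setKnob(2); // control+2
      });
      function setKnob(value) { /* update the knob position */ }
    </script>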

Patrick: Actually, it gets worse than that. As I noted previously, not all mobile/tablet devices with a paired keyboard actually send keyboard (keydown, keypress) events all the time. In iOS, with a paired keyboard (but no VO enabled), the keyboard is completely inactive except when the user is in a text entry field or similar (basically, it only works in the same situations in which iOS' on-screen keyboard would be triggered). When VO is enabled, the keyboard still only sends keyboard events when in a text entry field etc. In all other situations, every keystroke is intercepted by VO (and again, there is no mechanism to override this with role="application" or similar). In short, for iOS you can't rely on anything that listens for keydown/keypress either. In Android, the situation is more similar to what would happen on desktop (from what I recall at least...would need to do some further testing) in that the keyboard always works/fires key events. Not had a chance to test Windows Mobile with a paired keyboard yet, but I suspect it works in a similar way.

David: We never envisioned, in the years 2000-2008 when we were tying up WCAG, people who are blind using a flat screen to operate a mobile device. I think it was a huge leap forward for our industry, and we need to foster their relationship to their devices and run with it. Keyboard requirements are in place; they are not going away. Our job now is to look at the gaps and see if there is anything we can do to ensure these users can continue to use their flat screens, which have levelled the playing field for the blind, and to foster authoring that doesn't screw that up.

Here's a rewrite addressing the concerns.

2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass through gestures on the system (Level A)

Patrick: As said, when touch AT is running, all gestures are intercepted by the AT at the moment (unless you mean taps?). And only iOS, to my knowledge, has a passthrough gesture (which is not announced/exposed to users, so a user would have to guess that if they tried it, something would then happen).
If the intention was to also mean "taps", this is lost on me and possibly the majority of devs, as "gesture" usually implies a swipe, pinch, rotation, etc, which are all intercepted. [ED: skimming towards the end of the document, I see that in 3.3 Touchscreen Gestures "taps" are listed here. This, to me - and I'd argue most other devs - is confusing...I don't normally think of a "tap" as a "gesture"] So this SC (at least the "touch gestures with ... assistive technology activated") part is currently technically *impossible* to satisfy (for anything other than taps), except by not using gestures or by providing alternatives to gestures like actionable buttons.
This can be clarified in the prose for the SC, but perhaps a better way would be to drop the "gestures" word, and then the follow-up about passthrough, leaving a much simpler/clearer:

"2.5.4 Touch: For pages and applications that support touch, all functionality of the content is operable through touch with and without system assistive technology activated (Level A)"

I'm even wondering about the "For pages and applications that support touch" preamble...why have it here? Every other SC relating to touch should then also have it, for consistency? Or perhaps just drop that bit too?

"2.5.4 Touch: All functionality of the content is operable through touch with and without system assistive technology activated (Level A)"

OR is the original intent of this SC to be in fact

"2.5.4 Touch: For pages and applications that support touch *GESTURES*, all functionality of the content is operable through touch gestures with and without system assistive technology activated, without relying on pass through gestures on the system (Level A)"

is this about gestures? In that case, it's definitely technically impossible to satisfy this SC at all currently (see above), so I'd be strongly opposed to it.

Detlev: Maybe it's better to separate the discussion of terminology from the discussion of reworking the mobile TF Doc.
I personally don't get why someone would choose to call swiping or pinching a gesture, but refuse to apply this term to tapping. What about double and triple taps? Taps with two fingers? Long presses? Split taps? To me, it makes sense to call *all* finger actions applied to a touch screen a gesture. I simply don't get why tapping would not count. Where do you draw the line, and why? A related issue is the distinction between touch gestures and button presses. With virtual (non-tactile, but fixed position capacitive) buttons, you already get into a grey area. The drafted Guideline 2.5 "Touch Accessible: All functionality available via touch" probably needs to be expanded to include account for devices with physical (both tactile or capacitive) device buttons. Which would mean something like

Guideline 2.5 OR SC 2.5.4: On devices that support touch input, all functions are available via touch or button presses, also after AT is turned on (i.e. without the use of external keyboards).

Detlev: Not well put, but you get the idea.

David: I think when we say Touch, we mean all touch activities such as swipes, taps, gestures etc... anything you do to operate the page by touching it. Regarding gestures, all gestures are intercepted by VoiceOver. But all standard gestures are replaced by VoiceOver, unless the author does something dumb to break that. I think we need to, at a minimum, ensure that standard replacement gestures are not messed up. For instance: I recently tested a high profile app for a major sports event. It had a continuous load feature like twitter that kept populating as you scroll down with one finger. Turn on the VoiceOver and try the 3 finger equivalent of a one finger swipe to do a standard scroll and nothing happens to populate the page. The blind user has hit a brick wall. I think we have to ensure this type of thing doesn't happen on WCAG conforming things.


2.5.5 Touch Target Clearance: The center of each touch target has a distance of at least 9 mm from the center of any other touch target, except when the user has reduced the default scale of content. (Level AA)

David: Isn't this the same as 2.5.2 above (9 mm distance)?

Gregg: This is essentially a 9x9 target, center to center. The same problems as above: 9mm on what mobile device?

2.5.6 No Swipe Trap: When touch input behavior is modified by built-in assistive technology so that touch focus can be moved to a component of the page using swipe gestures, then focus can be moved away from that component using swipe gestures or the user is advised of the method for moving focus away. (Level A)

Gregg: Advised in an accessible way to all users?

2.5.7 Pinch Zoom: Browser pinch zoom is not blocked by the page's viewport meta element so that it can be used to zoom the page to 200%. Restrictive values for user-scalable and maximum-scale attributes of this meta element should be avoided.

David: Have to fix "should be avoided" or send to advisory

Gregg Comment: Maybe better as a failure of 1.4.4. FAILURE: Blocking the zoom feature (pinch zoom or other) without providing some other method for achieving 200% magnification or better.

Patrick: Just wondering if the fact that most mobile browsers (Chrome, Firefox, IE, Edge) provide settings to override/force zooming even when a page has disabled it makes any difference here? iOS/Safari is the only mainstream mobile browser which currently does not provide such a setting, granted. But what if that too did?
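
For reference, the two viewport patterns at issue in 2.5.7 (a sketch; the second shows the restrictive values named in the SC):

    <!-- Leaves browser pinch zoom available -->
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <!-- Fails the intent: blocks pinch zoom and caps scale below 200% -->
    <meta name="viewport"
          content="width=device-width, user-scalable=no, maximum-scale=1">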

2.5.8 Device manipulation: When device manipulation gestures are provided, touch and keyboard operable alternative control options are available.

Gregg: How is this different from "all must be keyboard operable"? This says if it is gesture, then it must be gesture and keyboard. So that looks the same as "it must be keyboard".

David: It adds "Touch".

New Possible Guideline Changing Screen Orientation (Portrait/Landscape)

3.4 Flexible Orientation: Ensure users can use the content in the orientation that suits their circumstances

Gregg: Ensure is a requirement. Is this always possible?

Possible New Success Criteria

3.4.1 Expose Orientation: Changes in orientation are programmatically exposed to ensure detection by assistive technology such as screen readers.

Gregg: This is not a web content issue but a mobile device issue. Hmmm, how about an alert? Again, if it can't always be possible, it shouldn't be an SC. Maybe it is always possible? ???? Home screens?

Patrick: Agree with Gregg this is not a web content issue as currently stated. Also, not every orientation change needs something like an alert to the user...what if nothing actually changes on the page when switching between portrait and landscape - does an AT user need to know that they just rotated the device? Perhaps the intent here is to ensure web content notifies the user if an orientation change had some effect, like a complete change in layout (for instance, a tab navigation in landscape turning into an accordion in portrait; a navigation bar in landscape turning into a button+dropdown in portrait)? If so, this needs rewording, along similar lines to a change in context?

Jon: Yes, that is the intention.  For example, if you change from landscape to portrait a set of links disappears and now there is a button menu instead.  Or controls disappear or appear depending on the orientation.
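
A minimal sketch of that intent (element names and announcement text are illustrative; assumes matchMedia "change" event support in current browsers): when rotation swaps one navigation pattern for another, the page tells AT via a live region.

    <div id="layout-status" role="status"></div>
    <script>
      var mq = window.matchMedia('(orientation: portrait)');
      mq.addEventListener('change', function (e) {
        // Announce only because the layout actually changes on rotation.
        document.getElementById('layout-status').textContent = e.matches
          ? 'Navigation is now a menu button'
          : 'Navigation is now a link bar';
      });
    </script>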

New Possible techniques for Success Criteria 3.2.3

If the navigation bar is collapsed into a single icon, the entries in the drop-down list that appear when activating the icon are still in the same relative order as the full navigation menu.

Gregg: Good to focus this as technique for WCAG.
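
One way to realize this technique (a sketch; names and the breakpoint are illustrative): the same list element backs both the full navigation bar and the collapsed menu, so the relative order cannot drift.

    <nav>
      <button id="menu-toggle" aria-expanded="false">Menu</button>
      <ul id="nav-list">
        <li><a href="/">Home</a></li>
        <li><a href="/products">Products</a></li>
        <li><a href="/contact">Contact</a></li>
      </ul>
    </nav>
    <style>
      /* Narrow screens: the list is hidden behind the toggle button. */
      @media (max-width: 40em) {
        #nav-list { display: none; }
        #nav-list.open { display: block; }
      }
      /* Wide screens: the full list shows and the toggle is hidden. */
      @media (min-width: 40em) {
        #menu-toggle { display: none; }
      }
    </style>
    <script>
      document.getElementById('menu-toggle').addEventListener('click', function () {
        var open = document.getElementById('nav-list').classList.toggle('open');
        this.setAttribute('aria-expanded', open);
      });
    </script>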

A Web site, when viewed on the different screen sizes and in different orientations, has some components that are hidden or appear in a different order. The components that show, however, remain consistent for any screen size and orientation.

New Techniques for 3.3.2 Labels or Instructions

Therefore, instructions (e.g. overlays, tooltips, tutorials, etc.) should be provided to explain what gestures can be used to control a given interface and whether there are alternatives.

Gregg: Good – advisory techniques.

Advisory Technique for Grouping operable elements that perform the same action (4.4 in mobile doc)

When multiple elements perform the same action or go to the same destination (e.g. link icon with link text), these should be contained within the same actionable element. This increases the touch target size for all users and benefits people with dexterity impairments. It also reduces the number of redundant focus targets, which benefits people using screen readers and keyboard/switch control.

Gregg: Good technique for WCAG. Oh, this is the same as H2, no? Are you just suggesting adding this text to H2? Good idea.
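
For concreteness, a sketch of the H2-style grouping (URL, icon, and label are illustrative): one actionable element contains both the icon and the text, rather than two adjacent links.

    <a href="products.html">
      <img src="products-icon.png" alt=""> Products
    </a>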

4.5 Provide clear indication that elements are actionable

New Guideline

1.6 Make interactive elements distinguishable

New Success Criteria

1.6.1 Triggers Distinguishable: Elements that trigger changes should be sufficiently distinct to be clearly distinguishable from non-actionable elements (content, status information, etc).

Gregg: Just as true for non-mobile. BUT: not testable. What does "sufficiently distinct" mean? Or "clearly distinguishable"? WCAG requires that they be programmatically determined, so users could use AT to make them very visible (much more so than designers would ever permit). But I'm not sure how you can create something testable out of this. Make it an ADVISORY TECHNIQUE???

New Sufficient Techniques for 1.6.1

Conventional Shape: Button shape (rounded corners, drop shadows), checkbox, select rectangle with arrow pointing downwards
Iconography: conventional visual icons (question mark, home icon, burger icon for menu, floppy disk for save, back arrow, etc)
Color Offset: shape with different background color to distinguish the element from the page background, different text color
Conventional Style: Underlined text for links, color for links
Conventional positioning: Commonly used position such as a top left position for back button (iOS), position of menu items within left-aligned lists in drop-down menus for navigation

Gregg: Not sure how these are sufficient by themselves to meet the above. This has to do with making things findable or understandable – not distinguishable.
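
If kept as an advisory technique, a CSS sketch of the conventional affordances listed above (values are illustrative):

    <style>
      /* Conventional link style: color plus underline */
      a { color: #0645ad; text-decoration: underline; }
      /* Conventional button shape: rounded corners and drop shadow */
      button {
        border-radius: 4px;
        box-shadow: 0 1px 2px rgba(0, 0, 0, 0.4);
      }
    </style>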

Set the virtual keyboard to the type of data entry required 5.1

New technique under 1.3.1 Info and Relationships

Data Mask: Set the virtual keyboard to the type of data entry required

Gregg: Good advisory technique.
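
For illustration, standard HTML input types that cue mobile browsers to show a matching virtual keyboard (field names are illustrative):

    <input type="email" name="email">     <!-- email keyboard -->
    <input type="tel" name="phone">       <!-- telephone keypad -->
    <input type="number" name="quantity"> <!-- numeric keyboard -->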

New Success Criteria under 4.1

4.1.4 Non-interference of AT: Content does not interfere with default functionality of platform level assistive technology

Gregg: How would content know what this was? For example, if a page provided self-voicing, this might interfere with the screen reader on the platform. So no page can ever self-voice?

Advisory techniques: 2.2 Zoom/Magnification


Support for system fonts that follow platform level user preferences for text size.

(Rationale for not being a sufficient technique: can this be done?)

Gregg Comment: This looks like a technique for 1.4.4, but you should say "to at least 200%" or else it could not be sufficient.
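
One way this could look in CSS (a sketch, not a vetted technique): sizing text in relative units so it tracks the user's browser/platform text-size preference instead of overriding it with fixed pixels.

    <style>
      html { font-size: 100%; } /* respect the user's preferred size */
      body { font-size: 1rem; }
      h1   { font-size: 2rem; }
    </style>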

Provide on-page controls to change the text size.

(Rationale for not being a sufficient technique: best practice, but usually not big enough, redundant with other zooming, extra work)
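
A minimal sketch of such an on-page control (the 125% value is arbitrary):

    <button onclick="document.documentElement.style.fontSize = '125%'">
      Larger text
    </button>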

Advisory techniques: Contrast (2.3)

The default point size for mobile platforms might be larger than the default point size used on non-mobile devices. When determining which contrast ratio to follow, developers should strive to apply the lessened contrast ratio only when text is roughly equivalent to 1.2 times bold or 1.5 times (120% bold or 150%) the default platform size.

(Rationale for not being an SC: "roughly equivalent" is not testable. Can we settle on something determinable and testable?)
Gregg Comment: How does an author know that someone will be viewing their content on a mobile device? Or what size mobile device? A tablet vs. an iPhone 4 is mega different. Not sure how you can make an SC out of this.

Advisory Techniques for 3.2 Touch Target Size and Spacing

Ensuring that touch targets close to the minimum size are surrounded by a small amount of inactive space.

Rationale for not being a Success Criterion: Cannot measure "small amount". Can we quantify it?

Gregg: What is the evidence that this is of value? It is not true of many keyboards; are they all unusable? Also, if you define a gap, see notes above on "what size screen for that gap?"

Advisory Techniques for touchscreen gestures

Gestures in apps should be as easy as possible to carry out.

Rationale for not being a Success Criterion: Cannot measure "as easy as possible". Can we rework it?

Some (but not all) mobile operating systems provide work-around features that let the user simulate complex gestures with simpler ones using an onscreen menu.

David: Rationale for not being a Success Criterion: Cannot measure this or apply it in all circumstances. Can we rework it?

Gregg: It SHOULD be required. But it is already covered by "all functions from keyboard interface", since that would provide an alternate method. So there is already an alternate way to do this. NOTE: again, for some devices it may not be possible to make something accessible. A brooch that you tap on, ask questions of, and which answers in audio would not be usable by someone who is deaf. The fact that you can't make it usable would not be a reason to rewrite the accessibility rules to make it possible for it to pass. It simply would always be inaccessible. Accessibility rules do not say that everything must be accessible to all. They say that if it is reasonable, or not an undue burden, or some such, then it needs to do x or y or z. Some things are not required to be accessible to some groups. That does not make them accessible; it only means they are not required to be accessible. RE keyboard interface: there may be some IoT devices that do not have remote interfaces, and the IoT device itself is too small or limited to be accessible. We don't rewrite the rules to make it possible for it to pass. We simply say that it is not accessible and it is not possible or reasonable to make it so. Most IoT does have a remote interface, so that can be accessible.

Usually, design alternatives exist to allow changes to settings via simple tap or swipe gestures.

Rationale for not being a Success Criterion: Cannot measure this or apply it in all circumstances. Can we rework it?

Advisory technique for Device manipulation Gestures

Some (but not all) mobile operating systems provide work-around features that let the user simulate device shakes, tilts, etc. from an onscreen menu.

Rationale for it not being a Success Criterion: It doesn't apply to all situations. Can we quantify it?

Advisory technique for placing buttons where they are easy to access (consistent layout)

Developers should also consider that an easy-to-use button placement for some users might cause difficulties for others (e.g. left- vs. right-handed use, assumptions about thumb range of motion). Therefore, flexible use should always be the goal.

Rationale for it not being a Success Criterion: It doesn't apply to all situations. Can we quantify it?

Gregg Comment: Quantifying it would be required, but since it doesn't apply to the many pages that have interactive content all over the page, quantification is not relevant.

Advisory technique for Positioning important page elements before the page scroll 4.3

Positioning important page information so it is visible without requiring scrolling can assist users with low vision and users with cognitive impairments.

Rationale for it not being a Success Criterion: It doesn't apply to all situations. Can we quantify it?

Gregg: Agree so advisory technique for WCAG?

Advisory technique Provide easy methods for data entry 5.2

Reduce the amount of text entry needed by providing select menus, radio buttons, check boxes or by automatically entering known information (e.g. date, time, location).

Rationale for it not being a Success Criterion: It doesn't apply to all situations. Can we quantify it?

Gregg: Can't be an SC because it is prescriptive and lists specific solutions, when others may also apply and be better.
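
For illustration, controls along these lines (field names are illustrative); the autocomplete hint lets the browser fill in known information:

    <select name="country">
      <option>Canada</option>
      <option>Germany</option>
    </select>
    <input type="date" name="arrival-date">
    <input name="fullname" autocomplete="name">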


Other ideas to consider

Understanding Mobile Document

1. Introduction

This section is non-normative.

This document provides informative guidance (but does not set requirements) with regard to interpreting and applying Web Content Accessibility Guidelines (WCAG) 2.0 [WCAG20] to web and non-web mobile content and applications.

While the World Wide Web Consortium (W3C)'s W3C Web Accessibility Initiative (WAI) is primarily concerned with web technologies, guidance for web-based technologies is also often relevant to non-web technologies. The W3C-WAI has published the Note Guidance on Applying WCAG 2.0 to Non-Web Information and Communications Technologies (WCAG2ICT) to provide authoritative guidance on how to apply WCAG to non-web technologies such as mobile native applications. The current document is a mobile-specific extension of this effort.

W3C Mobile Web Initiative Recommendations and Notes pertaining to mobile technologies also include the Mobile Web Best Practices and the Mobile Web Application Best Practices. These offer general guidance to developers on how to create content and applications that work well on mobile devices. The current document is focused on the accessibility of mobile web and applications to people with disabilities and is not intended to supplant any other W3C work.

1.1 WCAG 2.0 and Mobile Content/Applications

"Mobile" is a generic term for a broad range of wireless devices and applications that are easy to carry and use in a wide variety of settings, including outdoors. Mobile devices range from small handheld devices (e.g. feature phones, smartphones) to somewhat larger tablet devices. The term also applies to "wearables" such as "smart"-glasses, "smart"-watches and fitness bands, and is relevant to other small computing devices such as those embedded into car dashboards, airplane seatbacks, and household appliances.

While mobile is viewed by some as separate from "desktop/laptop", and thus perhaps requiring new and different accessibility guidance, in reality there is no absolute divide between the categories. For example:

Furthermore, the vast majority of user interface patterns from desktop/laptop systems (e.g. text, hyperlinks, tables, buttons, pop-up menus, etc.) are equally applicable to mobile. Therefore, it's not surprising that a large number of existing WCAG 2.0 techniques can be applied to mobile content and applications (see Appendix A). Overall, WCAG 2.0 is highly relevant to both web and non-web mobile content and applications.

That said, mobile devices do present a mix of accessibility issues that are different from the typical desktop/laptop. The "Discussion of Mobile-Related Issues" section, below, explains how these issues can be addressed in the context of WCAG 2.0 as it exists or with additional best practices. All the advice in this document can be applied to mobile web sites, mobile web applications, and hybrid web-native applications. Most of the advice also applies to native applications (also known as "mobile apps").

Note: WCAG 2.0 does not provide testable success criteria for some of the mobile-related issues. The work of the Mobile Accessibility Task Force has been to develop techniques and best practices in these areas. When the techniques or best practices don't map to specific WCAG success criteria, they aren't given a sufficient, advisory or failure designation. This doesn't mean that they are optional for creating accessible web content on a mobile platform, but rather that they cannot currently be assigned a designation. The Task Force anticipates that some of these techniques will be included as sufficient or advisory in a potential future iteration of WCAG.

The current document references existing WCAG 2.0 Techniques that apply to mobile platforms (see Appendix A) and provides new best practices, which may in the future become WCAG 2.0 Techniques that directly address emerging mobile accessibility challenges such as small screens, touch and gesture interfaces, and changing screen orientation.