WO2012098361A1 - Apparatus and method for improved user interaction in electronic devices - Google Patents

Apparatus and method for improved user interaction in electronic devices

Info

Publication number
WO2012098361A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
gesture
touch sensitive
gesture input
boundary
Prior art date
Application number
PCT/GB2012/000054
Other languages
French (fr)
Inventor
Kenneth Forbes JOHNSTONE
Tim Russell
Tsi Sheen YAP
Kevin David Joyce
Michael David Smith
Nicola EGER
Gavin Edmonds
Original Assignee
Inq Enterprises Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inq Enterprises Limited
Publication of WO2012098361A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Definitions

  • the present invention relates to apparatus, methods and computer programs providing improved user interaction with portable electronic devices.
  • touch sensitive areas that accept user input and this provides ease of use by enabling intuitive user interaction with the device.
  • Touch-sensitive areas are often implemented to allow users to select an item by finger touch or other similar input such as a stylus, and can initiate an operation by gestures, avoiding the need for a separate data input apparatus such as a mouse or trackpad.
  • Various technologies have been used to implement touch-sensitivity, including capacitive touch sensors which measure a change in capacitance resulting from the effect of touching the screen, resistive touch sensitive areas that measure a change in electrical current resulting from pressure on the touch sensitive areas reducing the gap between conductive layers, and other technologies.
  • touch sensitive areas on a phone display screen which displays graphical user interface objects such as virtual buttons that may change depending on the application that is running on the phone.
  • the functionality associated with the virtual buttons can be invoked through direct manipulation of the buttons by performing a certain type of gesture on the location of the touch sensitive area where the virtual button is located.
  • One type of gesture is a user tapping the location of the virtual button on the screen thereby activating the virtual button.
  • many current touch sensitive areas are capable of accepting and recognizing particular gestures in addition to a tap gesture.
  • WO 2009137419 relates to a touch-sensitive display screen that is enhanced by a touch-sensitive control area that extends beyond the edges of the display screen.
  • the touch-sensitive area outside the display screen, referred to as a "gesture area"
  • the present invention provides an electronic device for receiving gesture input, the device comprising: a first touch sensitive region for receiving user gesture input and a second touch sensitive region for receiving user gesture input; a processor for interpreting the user gesture input; a boundary region located between the first touch sensitive region and second touch sensitive region, wherein if a start of the gesture input is inside the first or second touch sensitive region and the completion of the gesture input is inside the boundary region, the processor is adapted to interpret that the gesture has been completed in the first or second touch sensitive region from which the gesture entered the boundary region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second touch sensitive region, the processor is adapted to interpret that the gesture input has started in the first or second touch sensitive region in which the gesture enters after exiting the boundary region.
  • the processor is adapted to interpret that the gesture started in the first touch sensitive region and completed in the second touch sensitive region, and if the start of the gesture input is inside the second touch sensitive region, crosses the boundary region, and is completed in the first touch sensitive region, the processor is adapted to interpret that the gesture started in the second touch sensitive region and completed in the first touch sensitive region.
  • the first touch sensitive region may have a different function to the second touch sensitive region.
  • the boundary region may be a variable number of pixels, based on the gesture input to the first or second touch sensitive region.
  • the boundary region has a height that is a predetermined number of pixels.
  • the boundary region may have a first boundary threshold and a second boundary threshold, and the processor can be adapted to determine where the gesture input is to be regarded as occurring on the basis of which boundary threshold is crossed by the gesture input.
  • the first touch sensitive region is a display screen.
  • the second touch sensitive region may be adapted to receive gesture inputs which are interpreted by the processor to carry out commands on the device that are different to if the same gesture inputs are performed in the first touch sensitive region.
  • the second touch sensitive region may have a number of hard coded buttons that perform predetermined functions when they are activated by performing a button tap gesture, but the second touch sensitive region is also able to detect a number of other gestures from a user.
  • the buttons may be touch sensitive or physical buttons.
  • the present invention provides a method of interpreting gesture input from a user of an electronic device, the method comprising: detecting a gesture input in a first or second touch sensitive region of an electronic device; determining if the gesture input starts or ends in a boundary region that is located between the first and second touch sensitive regions, wherein if a start of the gesture input is inside the first or second region and the completion of the gesture input is inside the boundary region, the gesture input is interpreted as being completed in the first region if the gesture input entered the boundary region from the first region, and is interpreted as being completed in the second region if the gesture input entered the boundary region from the second region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second region, the gesture input is interpreted as starting in the first region if the gesture enters the first region after exiting the boundary region or is interpreted as starting in the second region if the gesture enters the second region after exiting the boundary region; and performing a function in response to the interpreted gesture input.
  • the method may further comprise displaying an output in the first touch sensitive region of the electronic device.
  • the present invention provides a computer program product carrying a computer program embodied in a computer readable medium adapted to perform the aforementioned method.
  • Fig. 1 is a schematic representation of a mobile telephone, as a first example of an electronic device in which the invention may be implemented;
  • Fig. 2 is an architecture diagram of the Android operating system.
  • Fig. 3 is a simplified schematic representation of a mobile telephone according to a first embodiment of the invention showing different regions on a front surface of the phone in a simplified manner;
  • Figs. 4a and 4b show the predefined region of Fig. 3 in more detail;
  • Figs. 5a to 5e are schematic representations of the phone in Fig. 3 showing the different types of gesture input and how these are interpreted by a processor;
  • Figs. 6a to 6f show the different types of gesture that may be performed in the gesture control area of the phone in Fig. 3.
  • Figs. 7a and 7b show diagrams which illustrate the distinction between horizontal and vertical gestures in the gesture control area 14 in the first embodiment.
  • the mobile telephone has evolved significantly over recent years to include more advanced computing ability and additional functionality to the standard telephony functionality and such phones are known as "smartphones".
  • many phones are used for text messaging, Internet browsing and/or email as well as gaming.
  • Touchscreen technology is useful in phones since screen size is limited and touch screen input provides direct manipulation of the items on the display screen such that the area normally required by separate keyboards or numerical keypads is saved and taken up by the touch screen instead.
  • touch input controlled electronic devices such as handheld computers without telephony processors, e-reader devices, tablet PCs and PDAs.
  • Fig. 1 shows an exemplary mobile telephone handset, comprising a wireless communication unit having an antenna 101, a radio signal transceiver 102 for two-way communications, such as for GSM and UMTS telephony, and a wireless module 103 for other wireless communication protocols such as Wi-Fi.
  • An input unit includes a microphone 104, and a touchscreen 105 provides an input mechanism.
  • An output unit includes a speaker 106 and a display 107 for presenting iconic or textual representations of the phone's functions.
  • Electronic control circuitry includes amplifiers 108 and a number of dedicated chips providing ADC/DAC signal conversion 109, compression/decompression 110, encoding and modulation functions 111, and circuitry providing connections between these various components, and a microprocessor 112 for handling command and control signalling.
  • memory generally shown as memory unit 113.
  • Random access memory (in some cases SDRAM) is provided for storing data to be processed, and ROM and Flash memory for storing the phone's operating system and other instructions to be executed by each processor.
  • a power supply 114 in the form of a rechargeable battery provides power to the phone's functions.
  • the touchscreen 105 is coupled to the microprocessor 112 such that input on the touchscreen can be interpreted by the processor.
  • SIM card: Subscriber Identity Module
  • IMSI: the user's service-subscriber key
  • GSM: Global System for Mobile Communications
  • the SIM card typically stores the user's phone contacts and can store additional data specified by the user, as well as an identification of the user's permitted services and network information.
  • the functions of a mobile telephone are implemented using a combination of hardware and software. In many cases, the decision on whether to implement a particular functionality using electronic hardware or software is a commercial one relating to the ease with which new product versions can be made commercially available and updates can be provided (e.g. via software downloads), balanced against the speed and reliability of execution.
  • a smartphone typically runs an operating system and a large number of applications can run on top of the operating system.
  • the software architecture on a smartphone using the Android operating system (owned by Google Inc.), for example, comprises object oriented (Java and some C and C++) applications 200 running on a Java-based application framework 210 and supported by a set of libraries 220 (including Java core libraries 230) and the register-based Dalvik virtual machine 240.
  • the Dalvik Virtual Machine is optimized for resource-constrained devices - i.e. battery powered devices with limited memory and processor speed.
  • Java class files are converted into the compact Dalvik Executable (.dex) format before execution by an instance of the virtual machine.
  • the Dalvik VM relies on the Linux operating system kernel for underlying functionality, such as threading and low level memory management.
  • the Android operating system provides support for various hardware such as that described in relation to Fig. 1. The same reference numerals are used for the same hardware appearing in Figs. 1 and 2. Support can be provided for touchscreens 105, GPS navigation, cameras (still and video) and other hardware.
  • Android supports various connectivity technologies (CDMA, WiFi, UMTS, Bluetooth, WiMax, etc) and SMS text messaging and MMS messaging, as well as the Android Cloud to Device Messaging (C2DM) framework. Support for media streaming is provided by various plug-ins, and a lightweight relational database (SQLite) provides structured storage management.
  • a first embodiment is shown of a front surface of a smartphone 10 having a housing comprising a touch sensitive area 11.
  • the touch sensitive area 11 is provided along substantially the entire front surface of the handset 10 except for a boundary around the perimeter of the housing edges.
  • the touch sensitive area 11 has a first zone 12 and a second zone 14.
  • the functionality of the first zone 12 may be different to the functionality of the second zone 14.
  • the first zone is a display screen 12 that is provided on the majority of the front surface and the second zone is a gesture control area 14.
  • the display screen 12 is an area for applications on the phone to use as output but can also receive user input through user interaction with the touch sensitive area 11.
  • buttons 13 are provided in a gesture control area 14 and the buttons 13 can be activated by a tapping user input.
  • the gesture control area 14 is considered herein as an area that receives user input through user interaction with the touch sensitive area 11.
  • the display screen 12 is larger than the gesture control area 14 and the gesture control area 14 does not have a display screen in this embodiment.
  • the buttons are "fixed" in the sense that they are printed on the front surface of the phone and perform predetermined functions. It will be appreciated by the skilled person that a single button may be provided if required instead of a number of buttons.
  • the button can be located anywhere in the gesture control area 14.
  • the user interaction with the touch screen is through the use of gestures that can be executed by a user's finger or any other type of input device that can be detected by the touch sensitive area, such as a stylus for example.
  • the gestures can be any type of touch input or user interaction that is detectable by a touch sensitive area and capable of being interpreted by the microprocessor of the phone.
  • the gesture can be a discrete touch or multiple touches of the touch sensitive area, continuous motion along the touch sensitive area (e.g. touch-and-drag operations that move an icon) or a combination thereof.
  • the gestures may be distinguished by determining the location of the input on touch sensitive area and the direction and/or time the gesture is input on the touch sensitive area.
  • a tap gesture can be distinguished from a long press gesture on the basis of the time that an input is held on the touch sensitive area.
  • the display screen 12 displays information to the user of the phone which could be a plurality of graphical user interface objects (not shown) such as icons that can be tapped to activate programs related to the icons.
  • a web browser icon could be activated in the display screen 12 to activate a web page.
  • the display screen 12 is capable of receiving a number of other gesture inputs which can invoke functionality on the phone. The gesture inputs can take place wholly within the display screen 12 or begin or end in the display screen 12.
  • the gesture control area 14 is capable of receiving a number of gesture inputs which can invoke predefined functionality on the phone.
  • the gesture inputs can take place wholly within the gesture control area 14 or begin or end in the gesture control area 14.
  • the hard coded buttons 13 are distinguished from each other by icons 13a, 13b, 13c and the icons 13a, 13b, 13c are printed in the various locations of the gesture control area 14 to represent the various keys.
  • Icon 13a relates to a Menu key
  • icon 13b relates to a Home key
  • icon 13c relates to a Back key.
  • the icons may not extend the entire height H of the gesture control area 14, but tapping above the icons 13a, 13b, 13c in respective extension areas 13ai, 13bi, 13ci, still within the gesture control area 14, will cause the phone to carry out the functionality associated with the respective icon 13a, 13b, 13c.
  • buttons 13 will cause the phone to perform predetermined functions such as go to the "Menu" screen, go to the "Home" screen or go "Back" one screen or one space in a text entry screen.
  • the gestures could be a gesture occurring wholly within one of the gesture input areas, from one input area to the other or a gesture that is carried out simultaneously in both gesture input areas. For example, a two finger flick with the first finger in the display screen 12 and the second finger in the gesture control area 14.
  • a long press on the display screen 12 could fix a point on the display screen and another type of gesture such as a drag in the gesture control area 14 may invoke particular functionality about the point on the display screen such as a zoom or rotate.
  • the inventors have realised that there may be problems in distinguishing between the two different gesture input areas when gestures are performed very close to the boundary between the display screen 12 and the gesture control area 14. For example, a user may wish the phone to activate functionality that is associated with a sliding gesture being performed wholly within the gesture control area 14. If a user performs the gesture very near the display screen 12, it could be possible for the microprocessor of the phone to interpret the gesture as a function associated with a sliding gesture from the gesture control area 14 to the display screen 12 i.e. the command associated with a boundary crossing gesture.
  • the touch sensitive area 11 is provided with a predefined "demilitarised" boundary region 15 which is a region having the effect of disregarding movement which is part of a gesture input that may occur in the region if the movement begins or ends in the region 15. It is a boundary or threshold that separates the display screen 12 and gesture control area 14 and that must be crossed for an action associated with a gesture beginning in the gesture control area 14 and ending in the display screen 12 (or vice versa) to be carried out.
  • the screen 12 and gesture control area 14 are adjacent each other and the boundary region 15 is between the screen 12 and the gesture control area 14 such that the screen 12 and gesture control area 14 could be considered as separated by this region 15. If the gesture does not cross the region 15, it is recognised as a gesture occurring wholly within the zone in which the gesture was started.
  • the region 15 is made up of a number of rows of pixels, n, of the touch sensitive area 11.
  • the region 15 has an upper threshold 15a and lower threshold 15b. If a gesture is started or finished between these thresholds or extremities it can be considered to fall within the region 15.
  • a possible user finger touch T1 is shown in Fig. 4a for a gesture that is considered to start within the region 15 since the centre of the finger touch by a user falls just below the upper threshold 15a, but not all of the finger touch (represented by the larger circle) is within the region 15. This can still be interpreted as a touch falling within the region 15. Since the upper threshold 15a is crossed for a gesture in direction A, the gesture is considered to start in the display screen 12 which is located above the region 15.
  • the entire finger touch T2 falls within the region 15 and starts within this region.
  • a gesture is performed in direction B that crosses the lower threshold 15b and therefore the gesture is considered to start in the gesture control area 14 which is located below the lower threshold 15b. Accordingly, the start of the gesture in the region 15 is disregarded.
  • a completion of a gesture is considered to occur in the display screen 12 or gesture control area 14 depending on the threshold 15a, 15b of the region that is crossed when entering the region 15.
  • the microprocessor of the phone (as mentioned hereinbefore in relation to Fig. 1) will determine the location of the starting position or end position of the gesture and, if this falls within the location of the region 15, the part of the gesture which is in the region 15 will be ignored. In particular, in this embodiment, it is determined if the start or end of the gesture is inside the region 15. If it is inside, this is interpreted as the entire gesture being not at all in the region 15. The start or end point of the gesture that is in the region 15 is positioned just outside the region 15, in the display screen 12 or gesture control area 14 depending on the path of the gesture and where the gesture entered or exited the region 15. The gesture is "rewritten" as being entirely outside the DMZ, and the appropriate functionality associated with the rewritten gesture is carried out.
  • a variable size boundary region is provided based on a preliminary gesture recognition.
  • the region can be a variable number of pixels, or a single row.
  • the size of the boundary region 15 may depend on the type of gesture being interpreted. Further, the number of rows of pixels for the boundary region may vary across the width of the touch sensitive area 11 if different levels of sensitivity are desired for the region.
  • the region would be smaller in the middle and larger near the edge of the region. That is, the height of the region would be smaller in the middle and larger at the edges.
  • the region 15 may or may not be visible to a user of the phone as a distinct boundary extending across the width or any distance in the touch sensitive area 11 between the display screen 12 and the gesture control area 14.
  • Fig. 5a shows, on the left side, a first type of gesture that may be performed on the phone shown in Fig. 3.
  • This gesture is a diagonal drag where the fingertip of a user is moved along the surface of the touch sensitive area 11 from the display screen 12 to the gesture control area 14 without losing contact with the screen, and this gesture is indicated by dashed arrow C.
  • As shown on the right side, if a gesture begins within display screen 12, crosses region 15 and ends in gesture control area 14 or vice versa, the functionality associated with the gesture of moving from one area to the other will be carried out since the region 15 has been positively crossed. It will be appreciated that this could be any number of functions and will be dependent on the particular functionality that is required.
  • Fig. 5b shows, on the left side, a second type of gesture. This is a drag upward where the fingertip of a user is moved along the surface of the touch sensitive area 11 from the gesture control area 14 to the display screen 12 without losing contact with the touch sensitive area 11, and this gesture is indicated by dashed arrow D. As shown on the right side, the functionality associated with the gesture of moving up from area 14 to the other area 12 will be carried out since the region 15 has been positively crossed.
  • Fig. 5c shows, on the left side, a diagonal drag gesture similar to the gesture in Fig. 5a in that it starts in the display screen 12. However, this is different to Fig. 5a in that the drag (indicated by arrow E) ends in the region 15.
  • the processor determines that the input has commenced in the display screen 12 but has ended in the region 15 and ignores the part of the gesture that has been performed in the region 15.
  • the gesture is judged as a diagonally downwards drag gesture taking place wholly within the display screen 12 as shown on the right side of the figure.
  • the function associated with a diagonally downwards drag gesture in the display screen is performed by the phone and this could be a different function to that which is performed in response to the gesture in Fig. 5a.
  • Fig. 5d shows, on the left side, a drag upward gesture similar to the gesture in Fig. 5b in that it starts in the gesture control area 14. However, this is different to Fig. 5b in that the drag (indicated by arrow F) ends in the region 15.
  • the processor determines that the input has commenced in the gesture control area 14 but has ended in the region 15 and ignores the part of the gesture that has been performed in the region 15.
  • the gesture is judged as an upward drag gesture taking place wholly within the gesture control area 14 as shown on the right side of the figure.
  • the function associated with an upward drag gesture in the gesture control area 14 is performed by the phone and this could be a different function to that which is performed in response to the gesture in Fig. 5b. For example, the crossing of the region 15 from the gesture control area could bring up a keyboard for the user to enter text, whereas the upward drag wholly within the gesture control area could invoke another function such as a phone lock screen, for example, or no function at all.
  • Fig. 5e shows, on the left side, a drag upward gesture similar to the direction of the gesture in Fig. 5d.
  • the drag (indicated by arrow G) starts in the region 15 and ends in the display screen 12.
  • the processor determines that the input has commenced in the region 15 and has ended in the display screen 12 and ignores the part of the gesture that has been performed in the region 15.
  • the gesture is judged as an upward drag gesture taking place wholly within the display screen 12 as shown on the right side of the figure.
  • the function associated with an upward drag gesture in the display screen 12 is performed by the phone and this could be a different function to that which is performed in response to the gesture in Fig. 5d.
  • a boundary region 15 is useful to ensure that the phone does not inadvertently perform a function associated with a command gesture of Fig. 5b, for example, when the intention of the user is for the phone to carry out a function associated with a command gesture shown in Fig. 5d, for example.
  • the boundary region 15 is useful where there are two touch screen areas and gestures are being performed near the boundary between the two touch screen areas.
  • the boundary region 15 provides a tolerance for touch input gestures in a touch sensitive device with two distinct touch input areas and results in an improved and more reliable user interaction with the electronic device.
  • gestures can be performed in the display screen 12 and the gesture control area 14.
  • the following gestures can be recognised when input with a finger or other input means which can provide the appropriate functionality: tap; press; drag (single and double finger); flick (single and double finger); pinch; and spread.
  • the various types of gesture that can be performed in the gesture control area 14 are explained further below with reference to Figs. 6 and 7.
  • the processor will compare the gesture that is carried out with the predetermined types of gesture that are recognisable and execute the relevant functionality associated with the gesture if the gesture is recognised.
  • Tap (Fig. 6a): Briefly touch the surface of the gesture control area 14 with a fingertip. Recognised as a tap as long as the finger is removed from the surface before a predetermined time (e.g. 1.49 seconds). Movement is tolerant to a predetermined diameter (e.g. 1 cm) from the original recognition point of the touch input.
  • Press (Fig. 6b): Touch the surface of the gesture control area 14 with a fingertip for an extended period of time. Recognised as a press as long as the finger is held on the surface for at least a predetermined time (e.g. 1.5 seconds). Movement is tolerant to a predetermined diameter (e.g. 1 cm) from the original recognition point of the touch input.
  • Drag (Fig. 6c): Move a fingertip over the gesture control area 14 without losing contact. Can be performed in directions left, right, up. Horizontal movement recognised staying between 50 degrees to 130 degrees and 230 degrees to 310 degrees. Vertical movement recognised between 315 degrees and 45 degrees. Movement should be greater than the tolerance for tap and press movement, i.e. greater than 1 cm.
  • Flick (Fig. 6d): Quickly brush the gesture control area 14 with a fingertip. Can be performed in directions left, right, up. Horizontal movement recognised staying between 50 degrees to 130 degrees and 230 degrees to 310 degrees. Vertical movement recognised between 315 degrees and 45 degrees. Movement should be greater than the tolerance for tap and press movement, i.e. greater than 1 cm.
  • Two finger drag (Fig. 6e): Move two fingertips over the gesture control area 14 without losing contact. Can be performed in the up direction. Two fingers detected with movements between 315 degrees and 45 degrees.
  • Two finger flick (Fig. 6f): Quickly brush the gesture control area 14 with two fingertips. Can be performed in the up direction. Two fingers detected with movements between 315 degrees and 45 degrees.
  • Fig. 7a shows that a drag gesture indicated by solid arrow G, which is at an angle of 20 degrees, would be interpreted as a vertical movement upwards since the movement falls between 315 degrees and 45 degrees.
  • Fig. 7b shows that a drag gesture indicated by solid arrow H, which is at an angle of 80 degrees, would be interpreted as a horizontal movement right since the movement falls between 50 degrees and 130 degrees. Although not shown, it will be understood that a movement at an angle of between 230 degrees and 310 degrees would be considered a horizontal movement left (a simplified classification sketch is given after this list).
  • the thresholds for the tolerance can be varied depending on the tolerance required. However, it is found that for the gesture control area of this embodiment, the angles set as tolerances are appropriate for producing desirable results. Since the gesture control area 14 is of a small size such as a strip having the height of a few pixels and is only used for input rather than display of graphical objects that continuously change and may need to be moved, tolerances can be larger compared to the display screen 12 where fine gesture control may be required and the tolerances would be lower.
  • the two adjoining distinct zones need not be part of the same touch sensitive area or panel as each could still function in the desired manner if each is associated with a different touch sensitive area or panel and the appropriate modifications are made in order to recognise gesture inputs from each zone.
  • a boundary region is provided in the adjoining boundary of the zones in which part of a gesture that starts or ends in the region is not recognised as part of the overall gesture when interpreted.
  • the boundary region will comprise part of the first zone that is near the second zone and part of the second zone that is near the first zone. This could be a variable number of pixels of each zone which would be considered the boundary region.
  • the processor referred to herein may comprise a data processing unit and associated program code to control the performance of operations by the processor.
  • buttons 13 in the gesture control area may be dynamic and change with context rather than being fixed in functionality as in the first embodiment.
  • the buttons in the gesture control area 14 may be a graphical object representing a virtual button that is activated with a finger tap that may or may not provide haptic feedback to a user when the button is pressed and/or may be a physical button.
  • the second zone may be positioned in a different location on the touch sensitive area with respect to the first zone of the electronic device.
  • the second zone may be positioned above the first zone.
  • a boundary region can still be provided between the first and second zones in such a configuration or other configurations.
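The angle tolerances listed above for Figs. 6 and 7 lend themselves to a simple direction classifier. The following is a minimal sketch of how such a classifier might look, assuming an angle convention of 0 degrees pointing straight up and increasing clockwise; the class, method and constant names are illustrative assumptions and do not come from the patent.

```java
// Hedged sketch: classifies a drag/flick in the gesture control area by its angle,
// using the tolerance bands described for Figs. 6 and 7. The angle convention
// (0 degrees = straight up, increasing clockwise) and all names are assumptions.
enum SwipeDirection { UP, RIGHT, LEFT, UNRECOGNISED }

final class GestureAreaClassifier {
    private static final double MIN_TRAVEL_CM = 1.0; // movement below this is treated as a tap/press

    /** dxCm/dyCm: finger travel in centimetres, x to the right, y upwards. */
    static SwipeDirection classify(double dxCm, double dyCm) {
        double travel = Math.hypot(dxCm, dyCm);
        if (travel <= MIN_TRAVEL_CM) {
            return SwipeDirection.UNRECOGNISED;        // within the tap/press tolerance
        }
        // Angle clockwise from straight up, normalised to [0, 360).
        double angle = Math.toDegrees(Math.atan2(dxCm, dyCm));
        if (angle < 0) angle += 360;

        if (angle >= 315 || angle <= 45)  return SwipeDirection.UP;    // e.g. arrow G at 20 degrees
        if (angle >= 50 && angle <= 130)  return SwipeDirection.RIGHT; // e.g. arrow H at 80 degrees
        if (angle >= 230 && angle <= 310) return SwipeDirection.LEFT;
        return SwipeDirection.UNRECOGNISED;            // falls outside all tolerance bands
    }
}
```

With this convention, the 20 degree drag of Fig. 7a maps to UP and the 80 degree drag of Fig. 7b maps to RIGHT.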

Abstract

The present invention relates to an electronic device for receiving gesture input. The device comprises: a first touch sensitive region for receiving user gesture input and a second touch sensitive region for receiving user gesture input; a processor for interpreting the user gesture input; and a boundary region located between the first touch sensitive region and second touch sensitive region, wherein if a start of the gesture input is inside the first or second touch sensitive region and the completion of the gesture input is inside the boundary region, the processor is adapted to interpret that the gesture has been completed in the first or second touch sensitive region from which the gesture entered the boundary region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second touch sensitive region, the processor is adapted to interpret that the gesture input has started in the first or second touch sensitive region in which the gesture enters after exiting the boundary region. The present invention also provides a method of interpreting gesture input from a user of an electronic device and a computer program product carrying a computer program embodied in a computer readable medium adapted to perform the method.

Description

Apparatus and Method for Improved User Interaction in Electronic Devices
The present invention relates to apparatus, methods and computer programs providing improved user interaction with portable electronic devices.
Many mobile telephones, personal digital assistants (PDAs) and other portable electronic devices feature touch sensitive areas that accept user input and this provides ease of use by enabling intuitive user interaction with the device. Touch-sensitive areas are often implemented to allow users to select an item by finger touch or other similar input such as a stylus, and can initiate an operation by gestures, avoiding the need for a separate data input apparatus such as a mouse or trackpad. Various technologies have been used to implement touch-sensitivity, including capacitive touch sensors which measure a change in capacitance resulting from the effect of touching the screen, resistive touch sensitive areas that measure a change in electrical current resulting from pressure on the touch sensitive areas reducing the gap between conductive layers, and other technologies.
It is known to have touch sensitive areas on a phone display screen which displays graphical user interface objects such as virtual buttons that may change depending on the application that is running on the phone. The functionality associated with the virtual buttons can be invoked through direct manipulation of the buttons by performing a certain type of gesture on the location of the touch sensitive area where the virtual button is located. One type of gesture is a user tapping the location of the virtual button on the screen thereby activating the virtual button. Also, many current touch sensitive areas are capable of accepting and recognizing particular gestures in addition to a tap gesture.
WO 2009137419 relates to a touch-sensitive display screen that is enhanced by a touch-sensitive control area that extends beyond the edges of the display screen. The touch-sensitive area outside the display screen, referred to as a "gesture area," allows a user to activate commands using a gesture vocabulary. It allows some commands to be activated by inputting a gesture within the gesture area. Other commands can be activated by directly manipulating on-screen objects. According to this document, yet other commands can be activated by beginning a gesture within the gesture area, and finishing it on the screen (or vice versa), and/or by performing input that involves contemporaneous contact with both the gesture area and the screen. There are certain issues relating to electronic devices with touch sensitive areas where there is limited space to accept gestures from a user and to provide invocation of the correct function when certain gestures are performed. In view of the limited size of a display screen and an input such as a fingertip, it can be difficult for a user to carry out a gesture which is intended to represent a particular command.
From a first aspect, the present invention provides an electronic device for receiving gesture input, the device comprising: a first touch sensitive region for receiving user gesture input and a second touch sensitive region for receiving user gesture input; a processor for interpreting the user gesture input; a boundary region located between the first touch sensitive region and second touch sensitive region, wherein if a start of the gesture input is inside the first or second touch sensitive region and the completion of the gesture input is inside the boundary region, the processor is adapted to interpret that the gesture has been completed in the first or second touch sensitive region from which the gesture entered the boundary region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second touch sensitive region, the processor is adapted to interpret that the gesture input has started in the first or second touch sensitive region in which the gesture enters after exiting the boundary region. This means that there is less likelihood of a user inadvertently causing the mobile device to perform a function associated with a gesture from the first touch sensitive region to the second touch sensitive region such as a finger sliding gesture, when the intention is to perform a gesture only associated with the first touch sensitive region; or a gesture from the second touch sensitive region to the first touch sensitive region such as a finger sliding gesture, when the intention is to perform a gesture only associated with the second touch sensitive region. If a positive crossing of the boundary region is detected, a command associated with a gesture crossing a boundary is carried out. If a gesture input is performed that starts and ends in the boundary region without exiting the boundary region, such a gesture input may be interpreted as not a recognised gesture and may not result in a command being carried out. If a gesture input is performed that starts and ends in the boundary region but exits the boundary region, this may be recognised as a gesture starting or completing in the first or second touch sensitive region in a similar manner as explained in relation to the above first aspect.
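As an illustration of the interpretation rule of this first aspect, the following is a minimal sketch assuming a vertical layout with the first (display) region above the boundary region and the second (gesture control) region below it. The zone names, thresholds and helper methods are illustrative assumptions, not the patent's implementation.

```java
import java.util.List;

// Illustrative sketch only: reattributes a gesture whose start or end falls inside the
// boundary ("demilitarised") region to an adjacent touch sensitive region, following
// the rule of the first aspect. The vertical layout and all names are assumptions.
enum Zone { DISPLAY, BOUNDARY, GESTURE_AREA }

final class BoundaryRegionInterpreter {
    private final int upperThresholdY; // e.g. threshold 15a (display side)
    private final int lowerThresholdY; // e.g. threshold 15b (gesture control side)

    BoundaryRegionInterpreter(int upperThresholdY, int lowerThresholdY) {
        this.upperThresholdY = upperThresholdY;
        this.lowerThresholdY = lowerThresholdY;
    }

    Zone zoneAt(int y) {
        if (y < upperThresholdY) return Zone.DISPLAY;      // above the boundary region
        if (y > lowerThresholdY) return Zone.GESTURE_AREA; // below the boundary region
        return Zone.BOUNDARY;
    }

    /** Returns {effectiveStartZone, effectiveEndZone} for a sampled gesture path (y coordinates). */
    Zone[] interpret(List<Integer> pathY) {
        Zone start = zoneAt(pathY.get(0));
        Zone end = zoneAt(pathY.get(pathY.size() - 1));

        if (end == Zone.BOUNDARY) {
            // Completed inside the boundary: attribute the end to the region from which
            // the gesture entered the boundary (walk back to the last sample outside it).
            for (int i = pathY.size() - 1; i >= 0; i--) {
                Zone z = zoneAt(pathY.get(i));
                if (z != Zone.BOUNDARY) { end = z; break; }
            }
        }
        if (start == Zone.BOUNDARY) {
            // Started inside the boundary: attribute the start to the region the gesture
            // enters after exiting the boundary (walk forwards to the first sample outside it).
            for (int y : pathY) {
                Zone z = zoneAt(y);
                if (z != Zone.BOUNDARY) { start = z; break; }
            }
        }
        // If the gesture never leaves the boundary, both zones remain BOUNDARY,
        // corresponding to a gesture that is not recognised.
        return new Zone[] { start, end };
    }
}
```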
In one embodiment, if the start of the gesture input is inside the first touch sensitive region, crosses the boundary region, and is completed in the second touch sensitive region, the processor is adapted to interpret that the gesture started in the first touch sensitive region and completed in the second touch sensitive region, and if the start of the gesture input is inside the second touch sensitive region, crosses the boundary region, and is completed in the first touch sensitive region, the processor is adapted to interpret that the gesture started in the second touch sensitive region and completed in the first touch sensitive region.
The first touch sensitive region may have a different function to the second touch sensitive region.
Further, the boundary region may be a variable number of pixels, based on the gesture input to the first or second touch sensitive region. Alternatively, the boundary region has a height that is a predetermined number of pixels. The boundary region may have a first boundary threshold and a second boundary threshold, and the processor can be adapted to determine where the gesture input is to be regarded as occurring on the basis of which boundary threshold is crossed by the gesture input.
In another embodiment, the first touch sensitive region is a display screen. The second touch sensitive region may be adapted to receive gesture inputs which are interpreted by the processor to carry out commands on the device that are different to if the same gesture inputs are performed in the first touch sensitive region. The second touch sensitive region may have a number of hard coded buttons that perform predetermined functions when they are activated by performing a button tap gesture, but the second touch sensitive region is also able to detect a number of other gestures from a user. The buttons may be touch sensitive or physical buttons.
From a second aspect, the present invention provides a method of interpreting gesture input from a user of an electronic device, the method comprising: detecting a gesture input in a first or second touch sensitive region of an electronic device; determining if the gesture input starts or ends in a boundary region that is located between the first and second touch sensitive regions, wherein if a start of the gesture input is inside the first or second region and the completion of the gesture input is inside the boundary region, the gesture input is interpreted as being completed in the first region if the gesture input entered the boundary region from the first region, and is interpreted as being completed in the second region if the gesture input entered the boundary region from the second region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second region, the gesture input is interpreted as starting in the first region if the gesture enters the first region after exiting the boundary region or is interpreted as starting in the second region if the gesture enters the second region after exiting the boundary region; and performing a function in response to the interpreted gesture input.
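Continuing the sketch above, a hypothetical dispatcher could perform a function according to the interpreted start and end regions. The command methods below are placeholder stubs and are not defined by the patent; the mapping to Figs. 5a to 5e is given only as an example of how the interpreted zones might be used.

```java
// Hypothetical dispatch built on the BoundaryRegionInterpreter sketch above.
// All command methods are placeholder stubs, not functions defined by the patent.
final class GestureDispatcher {
    private final BoundaryRegionInterpreter interpreter;

    GestureDispatcher(BoundaryRegionInterpreter interpreter) { this.interpreter = interpreter; }

    void onGestureCompleted(java.util.List<Integer> pathY) {
        Zone[] zones = interpreter.interpret(pathY);
        if (zones[0] == Zone.BOUNDARY) return; // never left the boundary: not a recognised gesture
        if (zones[0] == Zone.GESTURE_AREA && zones[1] == Zone.DISPLAY) {
            onBoundaryCrossingDrag();     // e.g. Fig. 5b: could bring up a keyboard
        } else if (zones[0] == Zone.GESTURE_AREA && zones[1] == Zone.GESTURE_AREA) {
            onGestureAreaDrag();          // e.g. Fig. 5d, rewritten wholly into area 14
        } else if (zones[0] == Zone.DISPLAY && zones[1] == Zone.DISPLAY) {
            onDisplayDrag();              // e.g. Figs. 5c and 5e, rewritten into the display
        } else {
            onDisplayToGestureAreaDrag(); // e.g. Fig. 5a: display to gesture control area
        }
    }

    private void onBoundaryCrossingDrag()     { /* placeholder */ }
    private void onGestureAreaDrag()          { /* placeholder */ }
    private void onDisplayDrag()              { /* placeholder */ }
    private void onDisplayToGestureAreaDrag() { /* placeholder */ }
}
```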
The method may further comprise displaying an output in the first touch sensitive region of the electronic device. From a third aspect, the present invention provides a computer program product carrying a computer program embodied in a computer readable medium adapted to perform the aforementioned method.
Embodiments of the invention are described below in more detail, by way of example, with reference to the accompanying drawings in which:
Fig. 1 is a schematic representation of a mobile telephone, as a first example of an electronic device in which the invention may be implemented; Fig. 2 is an architecture diagram of the Android operating system.
Fig. 3 is a simplified schematic representation of a mobile telephone according to a first embodiment of the invention showing different regions on a front surface of the phone in a simplified manner;
Figs. 4a and 4b show the predefined region of Fig. 3 in more detail; Figs. 5a to 5e are schematic representations of the phone in Fig. 3 showing the different types of gesture input and how these are interpreted by a processor;
Figs. 6a to 6f show the different types of gesture that may be performed in the gesture control area of the phone in Fig. 3.
Figs. 7a and 7b show diagrams which illustrate the distinction between horizontal and vertical gestures in the gesture control area 14 in the first embodiment.
The mobile telephone has evolved significantly over recent years to include more advanced computing ability and additional functionality to the standard telephony functionality and such phones are known as "smartphones". In particular, many phones are used for text messaging, Internet browsing and/or email as well as gaming. Touchscreen technology is useful in phones since screen size is limited and touch screen input provides direct manipulation of the items on the display screen such that the area normally required by separate keyboards or numerical keypads is saved and taken up by the touch screen instead. Although the embodiments of the invention will now be described in relation to handheld smartphones, some aspects of the invention could be adapted for use in other touch input controlled electronic devices such as handheld computers without telephony processors, e-reader devices, tablet PCs and PDAs.
Fig. 1 shows an exemplary mobile telephone handset, comprising a wireless communication unit having an antenna 101, a radio signal transceiver 102 for two-way communications, such as for GSM and UMTS telephony, and a wireless module 103 for other wireless communication protocols such as Wi-Fi. An input unit includes a microphone 104, and a touchscreen 105 provides an input mechanism. An output unit includes a speaker 106 and a display 107 for presenting iconic or textual representations of the phone's functions. Electronic control circuitry includes amplifiers 108 and a number of dedicated chips providing ADC/DAC signal conversion 109, compression/decompression 110, encoding and modulation functions 111, and circuitry providing connections between these various components, and a microprocessor 112 for handling command and control signalling. Associated with the specific processors is memory generally shown as memory unit 113. Random access memory (in some cases SDRAM) is provided for storing data to be processed, and ROM and Flash memory for storing the phone's operating system and other instructions to be executed by each processor. A power supply 114 in the form of a rechargeable battery provides power to the phone's functions. The touchscreen 105 is coupled to the microprocessor 112 such that input on the touchscreen can be interpreted by the processor. These features are well known in the art and will not be described in more detail herein.
In addition to integral RAM and ROM, a small amount of storage capacity is provided by the telephone handset's Subscriber Identity Module (SIM card) 115, which stores the user's service-subscriber key (IMSI) that is needed by GSM telephony service providers and handling authentication. The SIM card typically stores the user's phone contacts and can store additional data specified by the user, as well as an identification of the user's permitted services and network information. As with most other electronic devices, the functions of a mobile telephone are implemented using a combination of hardware and software. In many cases, the decision on whether to implement a particular functionality using electronic hardware or software is a commercial one relating to the ease with which new product versions can be made commercially available and updates can be provided (e.g. via software downloads) balanced against the speed and reliability of execution (which can be faster using dedicated hardware), rather than because of a fundamental technical distinction. The term 'logic' is used herein to refer to hardware and/or software implementing functions of an electronic device. Where either software or hardware is referred to explicitly in the context of a particular embodiment of the invention, the reader will recognize that alternative software and hardware implementations are also possible to achieve the desired technical effects, and this specification should be interpreted accordingly.
A smartphone typically runs an operating system and a large number of applications can run on top of the operating system. As shown in Figure 2, the software architecture on a smartphone using the Android operating system (owned by Google Inc.), for example, comprises object oriented (Java and some C and C++) applications 200 running on a Java-based application framework 210 and supported by a set of libraries 220 (including Java core libraries 230) and the register-based Dalvik virtual machine 240. The Dalvik Virtual Machine is optimized for resource-constrained devices - i.e. battery powered devices with limited memory and processor speed. Java class files are converted into the compact Dalvik Executable (.dex) format before execution by an instance of the virtual machine. The Dalvik VM relies on the Linux operating system kernel for underlying functionality, such as threading and low level memory management. The Android operating system provides support for various hardware such as that described in relation to Fig. 1. The same reference numerals are used for the same hardware appearing in Figs. 1 and 2. Support can be provided for touchscreens 105, GPS navigation, cameras (still and video) and other hardware, as well as including an integral Web browser and graphics support and support for media playback in various formats. Android supports various connectivity technologies (CDMA, WiFi, UMTS, Bluetooth, WiMax, etc) and SMS text messaging and MMS messaging, as well as the Android Cloud to Device Messaging (C2DM) framework. Support for media streaming is provided by various plug-ins, and a lightweight relational database (SQLite) provides structured storage management. With a software development kit including various development tools, many new applications are being developed for the Android OS. Currently available Android phones include a wide variety of screen sizes, processor types and memory provision, from a large number of manufacturers. Which features of the operating system are exploited depends on the particular mobile device hardware.
Referring to Fig. 3, a first embodiment is shown of a front surface of a smartphone 10 having a housing comprising a touch sensitive area 11. The touch sensitive area 11 is provided along substantially the entire front surface of the handset 10 except for a boundary around the perimeter of the housing edges. The touch sensitive area 11 has a first zone 12 and a second zone 14. The functionality of the first zone 12 may be different to the functionality of the second zone 14. In this embodiment, the first zone is a display screen 12 that is provided on the majority of the front surface and the second zone is a gesture control area 14. The display screen 12 is an area for applications on the phone to use as output but can also receive user input through user interaction with the touch sensitive area 11. A plurality of hard coded buttons 13 are provided in a gesture control area 14 and the buttons 13 can be activated by a tapping user input. The gesture control area 14 is considered herein as an area that receives user input through user interaction with the touch sensitive area 11. The display screen 12 is larger than the gesture control area 14 and the gesture control area 14 does not have a display screen in this embodiment. The buttons are "fixed" in the sense that they are printed on the front surface of the phone and perform predetermined functions. It will be appreciated by the skilled person that a single button may be provided if required instead of a number of buttons. The button can be located anywhere in the gesture control area 14.
The user interaction with the touch screen is through the use of gestures that can be executed by a user's finger or any other type of input device that can be detected by the touch sensitive area, such as a stylus for example. The gestures can be any type of touch input or user interaction that is detectable by a touch sensitive area and capable of being interpreted by the microprocessor of the phone. The gesture can be a discrete touch or multiple touches of the touch sensitive area, continuous motion along the touch sensitive area (e.g. touch-and-drag operations that move an icon) or a combination thereof. The gestures may be distinguished by determining the location of the input on the touch sensitive area and the direction and/or time the gesture is input on the touch sensitive area. For example, a tap gesture can be distinguished from a long press gesture on the basis of the time that an input is held on the touch sensitive area. The display screen 12 displays information to the user of the phone which could be a plurality of graphical user interface objects (not shown) such as icons that can be tapped to activate programs related to the icons. For example, a web browser icon could be activated in the display screen 12 to activate a web page. In addition, the display screen 12 is capable of receiving a number of other gesture inputs which can invoke functionality on the phone. The gesture inputs can take place wholly within the display screen 12 or begin or end in the display screen 12.
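For instance, distinguishing a tap from a long press by hold time and movement, as described above, might look like the following simplified sketch. The 1.5 second and 1 cm figures echo the example tolerances given for Figs. 6a and 6b; the class and method names are illustrative only.

```java
// Simplified sketch of distinguishing a tap from a press by hold time and drift,
// using the example tolerances mentioned for Figs. 6a and 6b (about 1.5 s and 1 cm).
// All names and values are illustrative assumptions, not taken from the patent.
enum TouchType { TAP, PRESS, MOVEMENT }

final class TapPressClassifier {
    private static final long PRESS_THRESHOLD_MS = 1500; // held at least ~1.5 s -> press
    private static final double MOVE_TOLERANCE_CM = 1.0; // drift beyond this -> a moving gesture

    static TouchType classify(long downTimeMs, long upTimeMs, double driftCm) {
        if (driftCm > MOVE_TOLERANCE_CM) {
            return TouchType.MOVEMENT; // handled as a drag or flick, not a tap or press
        }
        return (upTimeMs - downTimeMs) >= PRESS_THRESHOLD_MS ? TouchType.PRESS : TouchType.TAP;
    }
}
```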
Similarly to the display screen 12, the gesture control area 14 is capable of receiving a number of gesture inputs which can invoke predefined functionality on the phone. The gesture inputs can take place wholly within the gesture control area 14 or begin or end in the gesture control area 14.
In this embodiment, the hard coded buttons 13 are distinguished from each other by icons 13a, 13b, 13c and the icons 13a, 13b, 13c are printed in the various locations of the gesture control area 14 to represent the various keys. Icon 13a relates to a Menu key, icon 13b relates to a Home key and icon 13c relates to a Back key. The icons may not extend the entire height H of the gesture control area 14, but tapping above the icons 13a, 13b, 13c in respective extension areas 13ai, 13bi, 13ci, still within the gesture control area 14, will cause the phone to carry out the functionality associated with the respective icon 13a, 13b, 13c. In this way, the buttons 13 will cause the phone to perform predetermined functions such as go to the "Menu" screen, go to the "Home" screen or go "Back" one screen or one space in a text entry screen. With two different gesture input areas (display screen 12 and gesture control area 14), it is possible to provide more functionality than if there was a single area for accepting gestures such as conventional touch screens with a display. The gestures could be a gesture occurring wholly within one of the gesture input areas, from one input area to the other or a gesture that is carried out simultaneously in both gesture input areas. For example, a two finger flick with the first finger in the display screen 12 and the second finger in the gesture control area 14. In another example, a long press on the display screen 12 could fix a point on the display screen and another type of gesture such as a drag in the gesture control area 14 may invoke particular functionality about the point on the display screen such as a zoom or rotate.
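One possible way to realise the extension areas 13ai, 13bi, 13ci is to hit-test taps against rectangles that span the full height H of the gesture control area. The sketch below is illustrative only: the rectangles, key names and the use of java.awt.Rectangle are assumptions made for the example, not details taken from the patent.

```java
import java.awt.Rectangle;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of hit-testing taps in the gesture control area 14 against the
// fixed buttons 13a-13c, including the extension areas 13ai-13ci above the printed
// icons. java.awt.Rectangle is used purely for brevity of the example.
final class GestureAreaButtons {
    private final Map<String, Rectangle> hitAreas = new LinkedHashMap<>();

    GestureAreaButtons(Rectangle menuArea, Rectangle homeArea, Rectangle backArea) {
        // Each rectangle spans the full height H of area 14, so a tap landing in the
        // extension area above an icon still activates that icon's function.
        hitAreas.put("MENU", menuArea);
        hitAreas.put("HOME", homeArea);
        hitAreas.put("BACK", backArea);
    }

    /** Returns the key whose (extended) area contains the tap, or null for a free-form gesture. */
    String buttonAt(int x, int y) {
        for (Map.Entry<String, Rectangle> e : hitAreas.entrySet()) {
            if (e.getValue().contains(x, y)) return e.getKey();
        }
        return null;
    }
}
```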
The inventors have realised that there may be problems in distinguishing between the two different gesture input areas when gestures are performed very close to the boundary between the display screen 12 and the gesture control area 14. For example, a user may wish the phone to activate functionality that is associated with a sliding gesture being performed wholly within the gesture control area 14. If a user performs the gesture very near the display screen 12, it could be possible for the microprocessor of the phone to interpret the gesture as a function associated with a sliding gesture from the gesture control area 14 to the display screen 12 i.e. the command associated with a boundary crossing gesture.
In order to address this possible drawback, as shown in Fig. 3, the touch sensitive area 11 is provided with a predefined "demilitarised" boundary region 15 which is a region having the effect of disregarding movement which is part of a gesture input that may occur in the region if the movement begins or ends in the region 15. It is a boundary or threshold that separates the display screen 12 and gesture control area 14 and that must be crossed for an action associated with a gesture beginning in the gesture control area 14 and ending in the display screen 12 (or vice versa) to be carried out. The screen 12 and gesture control area 14 are adjacent each other and the boundary region 15 is between the screen 12 and the gesture control area 14 such that the screen 12 and gesture control area 14 could be considered as separated by this region 15. If the gesture does not cross the region 15, it is recognised as a gesture occurring wholly within the zone in which the gesture was started.
Referring also to Figs. 4a and 4b, which show a more detailed version of the region 15 of Fig. 3, the region 15 is made up of a number of rows of pixels, n, of the touch sensitive area 11. The region 15 has an upper threshold 15a and a lower threshold 15b. If a gesture is started or finished between these thresholds or extremities, it can be considered to fall within the region 15. A possible user finger touch T1 is shown in Fig. 4a for a gesture that is considered to start within the region 15: the centre of the finger touch falls just below the upper threshold 15a, although not all of the finger touch (represented by the larger circle) is within the region 15. This can still be interpreted as a touch falling within the region 15. Since the upper threshold 15a is crossed for a gesture in direction A, the gesture is considered to start in the display screen 12, which is located above the region 15.
In Fig. 4b, the entire finger touch T2 falls within the region 15 and the gesture starts within this region. A gesture is performed in direction B that crosses the lower threshold 15b, and the gesture is therefore considered to start in the gesture control area 14, which is located below the lower threshold 15b. Accordingly, the start of the gesture in the region 15 is disregarded. In a similar manner, a completion of a gesture is considered to occur in the display screen 12 or the gesture control area 14 depending on which threshold 15a, 15b of the region is crossed when entering the region 15.
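The threshold test of Figs. 4a and 4b can be expressed as a simple comparison of the touch coordinate against the rows corresponding to thresholds 15a and 15b. The following is a minimal sketch only and not part of the disclosed embodiment: it assumes a coordinate system in which y increases downwards, and the threshold row values and function names are hypothetical.

```python
# Illustrative sketch only; names and coordinate convention are assumptions.
# y increases downwards: display screen 12 lies above the boundary region 15,
# the gesture control area 14 lies below it.

UPPER_THRESHOLD_Y = 480   # hypothetical row of upper threshold 15a
LOWER_THRESHOLD_Y = 500   # hypothetical row of lower threshold 15b

def classify_point(y):
    """Return which zone a touch point falls in, based on its y coordinate."""
    if y < UPPER_THRESHOLD_Y:
        return "display"        # display screen 12
    if y > LOWER_THRESHOLD_Y:
        return "gesture_area"   # gesture control area 14
    return "boundary"           # boundary ("demilitarised") region 15

# The centre of touch T1 in Fig. 4a lies just below threshold 15a,
# so it is classified as falling within the boundary region:
print(classify_point(485))  # -> "boundary"
```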
The microprocessor of the phone (as mentioned hereinbefore in relation to Fig. 1) will determine the location of the starting position or end position of the gesture, and if this falls within the location of the region 15, the part of the gesture which is in the region 15 will be ignored. In particular, in this embodiment, it is determined whether the start or end of the gesture is inside the region 15. If it is inside, this is interpreted as the gesture not occurring in the region 15 at all. The start or end point of the gesture that is in the region 15 is repositioned just outside the region 15, in the display screen 12 or gesture control area 14, depending on the path of the gesture and where the gesture entered or exited the region 15. The gesture is "rewritten" as being entirely outside the DMZ, and the appropriate functionality associated with the rewritten gesture is carried out. It will be appreciated by the skilled person that other methods can be provided to carry out the same type of detection and this is only one example. The steps may vary, for example, where a variable size boundary region is provided based on a preliminary gesture recognition. It will be appreciated that although the embodiment in Figs. 3 and 4 refers to a number of rows of pixels, in other embodiments the region can be a variable number of pixels, or a single row. The size of the boundary region 15 may depend on the type of gesture being interpreted. Further, the number of rows of pixels for the boundary region may vary across the width of the touch sensitive area 11 if different levels of sensitivity are desired for the region. For example, if a higher sensitivity to a crossing of the boundary region is required in the middle of the region, but less so at the outside, the region would be smaller in the middle and larger near the edges of the region. That is, the height of the region would be smaller in the middle and larger at the edges.
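By way of illustration only, the "rewriting" step described above might be sketched as follows. The function names, the coordinate convention and the simplified rule for deciding which side of the region 15 to move a point to are assumptions rather than features disclosed in the embodiment.

```python
# Illustrative sketch of the "rewriting" of a gesture whose start or end
# falls in the boundary region 15; all names and values are assumptions.

UPPER_THRESHOLD_Y = 480   # hypothetical row of threshold 15a
LOWER_THRESHOLD_Y = 500   # hypothetical row of threshold 15b

def in_boundary_region(point):
    x, y = point
    return UPPER_THRESHOLD_Y <= y <= LOWER_THRESHOLD_Y

def snap_outside(point, towards_display):
    """Move a point lying in region 15 to just outside it, into the display
    screen 12 (above) or the gesture control area 14 (below)."""
    x, y = point
    return (x, UPPER_THRESHOLD_Y - 1) if towards_display else (x, LOWER_THRESHOLD_Y + 1)

def rewrite_gesture(path):
    """Return the gesture path with any start or end point inside region 15
    relocated to the zone the gesture exited towards or entered from."""
    path = list(path)
    if in_boundary_region(path[0]):
        # Start in the DMZ: attribute the start to the side the path exits through.
        exits_upwards = path[-1][1] < UPPER_THRESHOLD_Y
        path[0] = snap_outside(path[0], towards_display=exits_upwards)
    if in_boundary_region(path[-1]):
        # End in the DMZ: attribute the end to the zone the gesture came from.
        entered_from_display = path[0][1] < UPPER_THRESHOLD_Y
        path[-1] = snap_outside(path[-1], towards_display=entered_from_display)
    return path

# Fig. 5c: a drag starting in the display screen and ending in region 15 is
# rewritten as ending just above the region, i.e. wholly within the display.
print(rewrite_gesture([(100, 300), (140, 490)]))  # end snapped to y = 479
```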
The region 15 may or may not be visible to a user of the phone as a distinct boundary extending across the width or any distance in the touch sensitive area 11 between the display screen 12 and the gesture control area 14.
Fig. 5a shows, on the left side, a first type of gesture that may be performed on the phone shown in Fig. 3. This gesture is a diagonal drag in which the fingertip of a user is moved along the surface of the touch sensitive area 11 from the display screen 12 to the gesture control area 14 without losing contact with the screen, and this gesture is indicated by dashed arrow C. As shown on the right side, if a gesture begins within the display screen 12, crosses the region 15 and ends in the gesture control area 14, or vice versa, the functionality associated with the gesture of moving from one area to the other will be carried out, since the region 15 has been positively crossed. It will be appreciated that this could be any number of functions and will depend on the particular functionality that is required.
Fig. 5b shows, on the left side, a second type of gesture. This is an upward drag in which the fingertip of a user is moved along the surface of the touch sensitive area 11 from the gesture control area 14 to the display screen 12 without losing contact with the touch sensitive area 11, and this gesture is indicated by dashed arrow D. As shown on the right side, the functionality associated with the gesture of moving up from the gesture control area 14 to the display screen 12 will be carried out, since the region 15 has been positively crossed.
Again, this could be any number of functions, as in Fig. 5a. Fig. 5c shows, on the left side, a diagonal drag gesture similar to the gesture in Fig. 5a in that it starts in the display screen 12. However, it differs from Fig. 5a in that the drag (indicated by arrow E) ends in the region 15. In such a case, the processor determines that the input has commenced in the display screen 12 but has ended in the region 15, and ignores the part of the gesture that has been performed in the region 15. The gesture is judged as a diagonally downwards drag gesture taking place wholly within the display screen 12, as shown on the right side of the figure. The function associated with a diagonally downwards drag gesture in the display screen is performed by the phone, and this could be a different function to that which is performed in response to the gesture in Fig. 5a.
Fig. 5d shows, on the left side, an upward drag gesture similar to the gesture in Fig. 5b in that it starts in the gesture control area 14. However, it differs from Fig. 5b in that the drag (indicated by arrow F) ends in the region 15. In such a case, the processor determines that the input has commenced in the gesture control area 14 but has ended in the region 15, and ignores the part of the gesture that has been performed in the region 15. The gesture is judged as an upward drag gesture taking place wholly within the gesture control area 14, as shown on the right side of the figure. The function associated with an upward drag gesture in the gesture control area 14 is performed by the phone, and this could be a different function to that which is performed in response to the gesture in Fig. 5b. For example, the crossing of the region 15 from the gesture control area could bring up a keyboard for the user to enter text, whereas the upward drag wholly within the gesture control area could invoke another function, such as a phone lock screen, or no function at all.
Fig. 5e shows, on the left side, an upward drag gesture similar in direction to the gesture in Fig. 5d. However, it differs from Fig. 5d in that the drag (indicated by arrow G) starts in the region 15 and ends in the display screen 12. In such a case, the processor determines that the input has commenced in the region 15 and has ended in the display screen 12, and ignores the part of the gesture that has been performed in the region 15. The gesture is judged as an upward drag gesture taking place wholly within the display screen 12, as shown on the right side of the figure. The function associated with an upward drag gesture in the display screen 12 is performed by the phone, and this could be a different function to that which is performed in response to the gesture in Fig. 5d. Accordingly, the presence of a boundary region 15 is useful to ensure that the phone does not inadvertently perform a function associated with the command gesture of Fig. 5b, for example, when the intention of the user is for the phone to carry out a function associated with the command gesture shown in Fig. 5d. The boundary region 15 is useful where there are two touch screen areas and gestures are being performed near the boundary between them. The boundary region 15 provides a tolerance for touch input gestures in a touch sensitive device with two distinct touch input areas and results in an improved and more reliable user interaction with the electronic device. Although a drag gesture is described in relation to the embodiment in Figs. 5a to 5e, it will be appreciated that any number of gestures can be performed in the display screen 12 and the gesture control area 14. For example, the following gestures, input with a finger or other input means, can be recognised and can provide the appropriate functionality: tap; press; drag (single and double finger); flick (single and double finger); pinch; and spread.
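Once the start and end of a gesture have been resolved to the display screen 12 or the gesture control area 14 as in Figs. 5a to 5e, selecting the associated functionality amounts to a simple lookup. The sketch below is illustrative only; apart from the keyboard and lock screen examples mentioned above, the command names are hypothetical and not functions defined in the embodiment.

```python
# Minimal dispatch sketch: start and end zones are assumed to have already
# been resolved, so the boundary region never appears here. Command names
# other than show_keyboard and lock_screen are hypothetical examples.

COMMANDS = {
    ("display", "display"):           "scroll_content",        # e.g. Fig. 5c / 5e
    ("gesture_area", "gesture_area"): "lock_screen",           # e.g. Fig. 5d
    ("display", "gesture_area"):      "minimise_application",  # e.g. Fig. 5a
    ("gesture_area", "display"):      "show_keyboard",         # e.g. Fig. 5b
}

def command_for(start_zone, end_zone):
    """Return the command associated with the resolved start and end zones."""
    return COMMANDS.get((start_zone, end_zone))

print(command_for("gesture_area", "display"))  # -> "show_keyboard"
```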
The various types of gesture that can be performed in the gesture control area 14 are explained further below with reference to Figs. 6 and 7. The processor will compare the gesture that is carried out with the predetermined types of gesture that are recognisable and execute the relevant functionality associated with the gesture if the gesture is recognised.
Tap (Fig. 6a): Briefly touch the surface of the gesture control area 14 with a fingertip. Recognised as a tap as long as the finger is removed from the surface before a predetermined time (e.g. 1.49 seconds). Movement is tolerated up to a predetermined diameter (e.g. 1 cm) from the original recognition point of the touch input. Press (Fig. 6b): Touch the surface of the gesture control area 14 with a fingertip for an extended period of time. Recognised as a press as long as the finger is held on the surface for at least a predetermined time (e.g. 1.5 seconds). Movement is tolerated up to a predetermined diameter (e.g. 1 cm) from the original recognition point of the touch input.
Drag (Fig. 6c): Move a fingertip over the gesture control area 14 without losing contact. Can be performed in the directions left, right and up. Horizontal movement is recognised when staying between 50 degrees and 130 degrees or between 230 degrees and 310 degrees. Vertical movement is recognised between 315 degrees and 45 degrees. Movement should be greater than the tolerance for tap and press movement, i.e. greater than 1 cm.
Flick (Fig. 6d): Quickly brush the gesture control area 14 with a fingertip. Can be performed in the directions left, right and up. Horizontal movement is recognised when staying between 50 degrees and 130 degrees or between 230 degrees and 310 degrees. Vertical movement is recognised between 315 degrees and 45 degrees. Movement should be greater than the tolerance for tap and press movement, i.e. greater than 1 cm.
Two finger drag (Fig. 6e): Move two fingertips over the gesture control area 14 without losing contact. Can be performed in the up direction. Two fingers are detected with movements between 315 degrees and 45 degrees.
Two finger flick (Fig. 6f): Quickly brush the gesture control area 14 with two fingertips. Can be performed in the up direction. Two fingers are detected with movements between 315 degrees and 45 degrees.
Pinch (Fig. 6g): Touch the gesture control area 14 with two fingers and bring them closer together. Two fingers are recognised on the surface moving towards each other horizontally. Spread (Fig. 6h): Touch the gesture control area 14 with two fingers and move them apart. Two fingers are recognised on the surface moving away from each other horizontally. As suggested above, the distinction between a flick and a drag is defined by the speed of movement of the gesture. Vertical and horizontal movements are distinguished by the angle of movement. The gesture control area is relatively small compared to the display screen, so the angle tolerances are set such that the different gestures can be easily recognised by the phone processor.
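A sketch of how the timing and distance thresholds quoted above might be combined is given below. The numeric speed threshold separating a flick from a drag is an assumption, since the description only states that the two are distinguished by speed; the pinch and spread gestures are omitted for brevity, and all names are hypothetical.

```python
# Illustrative classification of single-direction inputs in the gesture
# control area 14; the flick speed threshold is an assumed value.

TAP_MAX_DURATION_S = 1.49   # finger must lift before this for a tap
PRESS_MIN_DURATION_S = 1.5  # finger held at least this long for a press
MOVE_TOLERANCE_CM = 1.0     # movement within this diameter is not a drag/flick
FLICK_MIN_SPEED_CM_S = 10.0 # assumed speed above which a movement is a flick

def classify(duration_s, distance_cm, finger_count):
    """Classify a gesture by its duration, travel distance and finger count."""
    if distance_cm <= MOVE_TOLERANCE_CM:
        # Effectively stationary: tap or press depending on duration.
        return "tap" if duration_s < TAP_MAX_DURATION_S else "press"
    speed = distance_cm / duration_s
    kind = "flick" if speed >= FLICK_MIN_SPEED_CM_S else "drag"
    return f"two finger {kind}" if finger_count == 2 else kind

print(classify(0.2, 0.3, 1))   # -> "tap"
print(classify(2.0, 0.5, 1))   # -> "press"
print(classify(0.5, 3.0, 1))   # -> "drag"  (6 cm/s, below the assumed flick speed)
print(classify(0.1, 3.0, 2))   # -> "two finger flick"  (30 cm/s)
```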
In this embodiment, as shown in Fig. 7a, a drag gesture indicated by solid arrow G, which is at an angle of 20 degrees, would be interpreted as a vertical movement upwards since the movement falls between 315 degrees and 45 degrees.
Fig. 7b shows that a drag gesture indicated by solid arrow H, which is at an angle of 80 degrees, would be interpreted as a horizontal movement to the right since the movement falls between 50 degrees and 130 degrees. Although not shown, it will be understood that a movement at an angle of between 230 degrees and 310 degrees would be considered a horizontal movement to the left.
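The angle tolerances of Figs. 7a and 7b can be expressed as a small classification function. The sketch below assumes the convention implied by the figures, namely that 0 degrees points straight up and angles increase clockwise; it is illustrative only.

```python
# Direction test sketch, assuming 0 degrees = straight up, clockwise angles.

def direction(angle_deg):
    """Map a movement angle in degrees to up / right / left, or None if the
    movement falls outside all of the recognised tolerance bands."""
    a = angle_deg % 360
    if a >= 315 or a <= 45:
        return "up"      # vertical movement, as in Fig. 7a (20 degrees)
    if 50 <= a <= 130:
        return "right"   # horizontal movement right, as in Fig. 7b (80 degrees)
    if 230 <= a <= 310:
        return "left"    # horizontal movement left
    return None          # e.g. 46 to 49 degrees: not recognised

print(direction(20))   # -> "up"
print(direction(80))   # -> "right"
print(direction(250))  # -> "left"
```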
It will be appreciated that the angular thresholds can be varied depending on the tolerance required. However, it is found that, for the gesture control area of this embodiment, the angles set as tolerances produce desirable results. Since the gesture control area 14 is of a small size, such as a strip having a height of a few pixels, and is only used for input rather than for the display of graphical objects that continuously change and may need to be moved, the tolerances can be larger than in the display screen 12, where fine gesture control may be required and the tolerances would be smaller.
It will be appreciated by those skilled in the art that, instead of a single touch sensitive area with two adjoining zones having different functionality (the display screen and gesture control area of the first embodiment), the two adjoining distinct zones need not be part of the same touch sensitive area or panel; each could still function in the desired manner if it is associated with a different touch sensitive area or panel and the appropriate modifications are made in order to recognise gesture inputs from each zone. In such a modification, a boundary region is provided at the adjoining boundary of the zones, in which any part of a gesture that starts or ends in the region is not recognised as part of the overall gesture when it is interpreted. At the point where the two zones adjoin, the boundary region will comprise a part of the first zone that is near the second zone and a part of the second zone that is near the first zone. This could be a variable number of pixels of each zone which would be considered the boundary region.
The processor referred to herein may comprise a data processing unit and associated program code to control the performance of operations by the processor.
It will be appreciated that in another embodiment, the buttons 13 in the gesture control area may be dynamic and change with context rather than being fixed in functionality as in the first embodiment. Each button in the gesture control area 14 may be a graphical object representing a virtual button that is activated with a finger tap, which may or may not provide haptic feedback to a user when the button is pressed, and/or may be a physical button.
In another embodiment, the second zone may be positioned in a different location on the touch sensitive area with respect to the first zone of the electronic device. For example, the second zone may be positioned above the first zone. A boundary region can still be provided between the first and second zones in such a configuration or other configurations. In addition to the embodiments of the invention described in detail above, the skilled person will recognize that various features described herein can be modified and combined with additional features, and the resulting additional embodiments of the invention are also within the scope of the accompanying claims.

Claims

1. An electronic device for receiving gesture input, the device comprising:
a first touch sensitive region for receiving user gesture input and a second touch sensitive region for receiving user gesture input,
a processor for interpreting the user gesture input;
a boundary region located between the first touch sensitive region and second touch sensitive region,
wherein if a start of the gesture input is inside the first or second touch sensitive region and the completion of the gesture input is inside the boundary region, the processor is adapted to interpret that the gesture has been completed in the first or second touch sensitive region from which the gesture entered the boundary region, and if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second touch sensitive region, the processor is adapted to interpret that the gesture input has started in the first or second touch sensitive region in which the gesture enters after exiting the boundary region.
2. The electronic device according to claim 1 wherein, if the start of the gesture input is inside the first touch sensitive region, crosses the boundary region, and is completed in the second touch sensitive region, the processor is adapted to interpret that the gesture started in the first touch sensitive region and completed in the second touch sensitive region, and if the start of the gesture input is inside the second touch sensitive region, crosses the boundary region, and is completed in the first touch sensitive region, the processor is adapted to interpret that the gesture started in the second touch sensitive region and completed in the first touch sensitive region.
3. The electronic device according to claim 1 or 2, wherein the first touch sensitive region has a different function to the second touch sensitive region.
4. The electronic device according to any preceding claim wherein the boundary region is a variable number of pixels and is based on the gesture input to the first or second touch sensitive region.
5. The electronic device according to any one of claims 1 to 3, wherein the boundary region has a height that is a predetermined number of pixels.
6. The electronic device according to claim 5, wherein the boundary region has a first boundary threshold and a second boundary threshold, and the processor is adapted to determine where the gesture input is to be regarded as occurring on the basis of which boundary threshold is crossed by the gesture input.
7. The electronic device according to any preceding claim, wherein the first touch sensitive region is a display screen.
8. The electronic device according to any preceding claim, wherein the second touch sensitive region comprises at least one button.
9. The electronic device according to claim 8, wherein the button is a touch sensitive button that performs a predetermined functionality.
10. The electronic device of any preceding claim, wherein the second touch sensitive region is adapted to receive gesture inputs which are interpreted by the processor to carry out commands on the device that are different to if the same gesture inputs are performed in the first touch sensitive region.
11. A method of interpreting gesture input from a user of an electronic device, the method comprising:
detecting a gesture input in a first or second touch sensitive region of an electronic device;
determining if the gesture input starts or ends in a boundary region that is located between the first and second touch sensitive regions, wherein
if a start of the gesture input is inside the first or second region and the completion of the gesture input is inside the boundary region, the gesture input is interpreted as being completed in the first region if the gesture input entered the boundary region from the first region, and is interpreted as being completed in the second region if the gesture input entered the boundary region from the second region, and
if the start of the gesture input is inside the boundary region and the completion of the gesture input is inside the first or second region, the gesture input is interpreted as starting in the first region if the gesture enters the first region after exiting the boundary region or is interpreted as starting in the second region if the gesture enters the second region after exiting the boundary region;
performing a function in response to the interpreted gesture input.
12. A computer program product carrying a computer program embodied in a computer readable medium adapted to perform the method according to claim 11.
13. An electronic device, method or computer program product as hereinbefore described with reference to the accompanying drawings.
PCT/GB2012/000054 2011-01-21 2012-01-20 Apparatus and method for improved user interaction in electronic devices WO2012098361A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1101133.5 2011-01-21
GB201101133A GB2487425A (en) 2011-01-21 2011-01-21 Gesture input on a device a first and second touch sensitive area and a boundary region

Publications (1)

Publication Number Publication Date
WO2012098361A1 true WO2012098361A1 (en) 2012-07-26

Family

ID=43769474

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2012/000054 WO2012098361A1 (en) 2011-01-21 2012-01-20 Apparatus and method for improved user interaction in electronic devices

Country Status (2)

Country Link
GB (1) GB2487425A (en)
WO (1) WO2012098361A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108279834A (en) * 2014-09-29 2018-07-13 联想(北京)有限公司 A kind of control method and device
KR102411283B1 (en) * 2017-08-23 2022-06-21 삼성전자주식회사 Method for determining input detection region corresponding to user interface and electronic device thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5877766A (en) * 1997-08-15 1999-03-02 International Business Machines Corporation Multi-node user interface component and method thereof for use in accessing a plurality of linked records
US6295049B1 (en) * 1999-03-03 2001-09-25 Richard T. Minner Computer system utilizing graphical user interface with hysteresis to inhibit accidental selection of a region due to unintended cursor motion and method
WO2009137419A2 (en) 2008-05-06 2009-11-12 Palm, Inc. Extended touch-sensitive control area for electronic device
US20090322689A1 (en) * 2008-06-30 2009-12-31 Wah Yiu Kwong Touch input across touch-sensitive display devices

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02112013A (en) * 1988-10-21 1990-04-24 Toshiba Corp Touch panel type input device
KR20100078295A (en) * 2008-12-30 2010-07-08 삼성전자주식회사 Apparatus and method for controlling operation of portable terminal using different touch zone

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103809894A (en) * 2012-11-15 2014-05-21 华为终端有限公司 Gesture recognition method and electronic equipment
CN103809894B (en) * 2012-11-15 2017-06-27 华为终端有限公司 A kind of recognition methods of gesture and electronic equipment
CN107690619A (en) * 2015-06-05 2018-02-13 苹果公司 For handling the apparatus and method of touch input on the multiple regions of touch sensitive surface
US10474350B2 (en) 2015-06-05 2019-11-12 Apple Inc. Devices and methods for processing touch inputs over multiple regions of a touch-sensitive surface
CN110568965A (en) * 2015-06-05 2019-12-13 苹果公司 device and method for processing touch input on multiple areas of a touch-sensitive surface
CN110568965B (en) * 2015-06-05 2023-05-30 苹果公司 Apparatus and method for processing touch input on multiple areas of a touch-sensitive surface

Also Published As

Publication number Publication date
GB2487425A (en) 2012-07-25
GB201101133D0 (en) 2011-03-09

Similar Documents

Publication Publication Date Title
EP3336678B1 (en) Method and electronic device for preventing touch button from being false triggered
US10367765B2 (en) User terminal and method of displaying lock screen thereof
KR102109617B1 (en) Terminal including fingerprint reader and method for processing a user input through the fingerprint reader
US10551987B2 (en) Multiple screen mode in mobile terminal
US9965158B2 (en) Touch screen hover input handling
EP2684115B1 (en) Method and apparatus for providing quick access to media functions from a locked screen
EP2523070A2 (en) Input processing for character matching and predicted word matching
US20170351404A1 (en) Method and apparatus for moving icon, an apparatus and non-volatile computer storage medium
US20140152585A1 (en) Scroll jump interface for touchscreen input/output device
TWI675329B (en) Information image display method and device
KR102168648B1 (en) User terminal apparatus and control method thereof
EP3336679A1 (en) Method and terminal for preventing unintentional triggering of a touch key and storage medium
US8935638B2 (en) Non-textual user input
KR101251761B1 (en) Method for Data Transferring Between Applications and Terminal Apparatus Using the Method
WO2018068328A1 (en) Interface display method and terminal
US9383858B2 (en) Method and device for executing an operation on a mobile device
CA2884202A1 (en) Activation of an electronic device with a capacitive keyboard
KR20170004220A (en) Electronic device for displaying keypad and keypad displaying method thereof
WO2012098361A1 (en) Apparatus and method for improved user interaction in electronic devices
KR102180404B1 (en) User terminal apparatus and control method thereof
EP2899623A2 (en) Information processing apparatus, information processing method, and program
WO2012098360A2 (en) Electronic device and method with improved lock management and user interaction
KR20130042913A (en) Method, apparatus, and recording medium for processing touch process
US20150153925A1 (en) Method for operating gestures and method for calling cursor
EP3101522A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12704100

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12704100

Country of ref document: EP

Kind code of ref document: A1
