US20120007870A1 - Method of changing processor-generated individual characterizations presented on multiple interacting processor-controlled objects

Info

Publication number: US20120007870A1
Authority: US (United States)
Prior art keywords: processor, changeable, individual, objects, characterizations
Legal status: Abandoned (assumed status; not a legal conclusion)
Application number: US13/236,516
Inventor: Martin Owen
Original assignee: Smalti Tech Ltd
Application filed by Smalti Tech Ltd; priority to US13/236,516
Family litigation: first worldwide family litigation filed ("Global patent litigation dataset" by Darts-ip, licensed under a Creative Commons Attribution 4.0 International License)
Assigned to MITILE LTD. (assignors: SMALTI TECHNOLOGY LTD.)
Assigned to EDWARDS, THOMAS JOSEPH, MR (assignors: MITILE LTD)

Classifications

    • G09B 1/36 (G: Physics; G09: Education; G09B: Educational or demonstration appliances): manually or mechanically operated educational appliances using elements forming, or bearing, symbols, signs, pictures, or the like, to be used without a special support, the elements being connectible by corresponding projections and recesses
    • G09B 1/34: as G09B 1/36, but with the elements to be placed loosely in adjacent relationship
    • G09B 5/06: electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • A63F 3/0423 (A: Human necessities; A63: Sports, games, amusements; A63F: Card, board or roulette games; video games): word games, e.g. Scrabble
    • A63F 9/28: chain-reaction games with toppling pieces; dispensers or positioning devices therefor

Abstract

Processor-controlled objects, such as inter-communicating processor-controlled blocks, are adapted to present changeable individual characterizations to a user. A user manipulating the objects can cause, over time, a designated object to inherit characterizations and properties from other interacting objects to permit scalability in a set of such objects. The communication of individual characterization between interacting objects allows generation of sensory responses (in a response generator of a specific object or otherwise in a response generator associated with at least one other similar object) based on proximity, relative position and the individual characterization presented on and by those interacting objects at the time of interaction. In this way, a set of objects has vastly extended interactive capabilities since each object is capable of dynamically taking on different characterizations arising from a meaningful combination of properties from different conjoined objects.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 11/142,955, filed on Jun. 2, 2005, entitled “MANIPULABLE INTERACTIVE DEVICES,” which is incorporated herein by reference in its entirety for all purposes.
  • FIELD OF THE INVENTION
  • This invention generally relates to processor-controlled objects, such as inter-communicating processor-controlled blocks. More particularly, the present invention is directed to a method of controllably presenting changeable, processor-generated individual characterizations presented on each of a plurality of objects that selectively interact.
  • BACKGROUND OF THE INVENTION
  • This invention relates to a manually manipulable device, especially a device which is adapted to interact with a similar device or devices according to their relative locations so as to produce a sensory response for a user, such a device being especially suitable for educational or entertainment purposes.
  • SUMMARY OF THE INVENTION
  • The present invention generally comprises a method of using processor-controlled objects, including controllably presenting changeable, processor-generated individual characterizations on each of a plurality of processor-controlled objects to a user. A user manipulating the objects can cause, over time, designated objects to inherit characterizations and properties from other interacting objects. Interactions further generate sensory responses in the response generator of a specific object or otherwise in a response generator associated with another similar object based on proximity, relative position and the individual characterization presented on and by those interacting objects at the time of interaction. A set of objects has extended interactive capabilities since each object is capable of dynamically taking on different characterizations arising from a meaningful combination of properties from different conjoined objects.
  • In one embodiment, a method of controllably presenting changeable, processor-generated individual characterizations presented on each of a plurality of objects that selectively interact includes generating and displaying under powered processor-control first visual display material on a first movable object. The first visual display material has a first changeable individual characterization having a first property. The method includes sensing proximity and relative position of second visual display material generated and displayed under powered processor-control on a second movable object separate to the first movable object. The second object is brought into processor-resolvable interacting proximity with the first moveable object by manipulation of one or more of the first movable object and the second movable object. The second visual display material has a second changeable individual characterization independent to the first individual characterization and the second changeable individual characterization has a second property. In response to processor-resolved interaction between the first and second changeable individual characterizations arising from sensed proximity and relative position of the first and second objects, the method includes generating a user-perceivable sensory response from a response generator, wherein the user-perceivable sensory response is dependent upon the sensed relative positions of the first visual display material to the second visual display material and is indicative of a contextual relationship that arises between said first property of the first changeable individual characterizations and the second property of the second changeable individual characterizations. The method also includes, under processor-control, selectively and autonomously changing with time the individual characterizations on at least one of the first and second objects such that the first property and the second property change to allow different processor-resolvable interactions to take place between the first and second objects, which different processor-resolvable interactions give rise to new user-perceivable sensory responses.
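  • Purely as an editorial illustration, and not part of the original disclosure, the method above can be sketched in Python. Every name here (Block, resolve_interaction, the "left-of" reading rule and the timing of the autonomous change) is an assumption made for clarity:

    import random

    class Block:
        def __init__(self, display, prop):
            self.display = display    # changeable individual characterization, e.g. 'c'
            self.prop = prop          # its property, e.g. the phoneme '/c/'

    def resolve_interaction(first, second, relative_position):
        # A contextual relationship between the two properties arises from
        # the sensed relative position of the two display materials.
        if relative_position == "left-of":
            return first.prop + " " + second.prop   # read in row order
        if relative_position == "above":
            return first.prop                       # lower object inherits
        return None

    def interact(first, second, relative_position):
        response = resolve_interaction(first, second, relative_position)
        if response is not None:
            print("sensory response:", response)    # e.g. speak or animate
        # Selective, autonomous change over time: swapping a characterization
        # lets different processor-resolvable interactions occur later.
        if random.random() < 0.2:
            first.display, first.prop = "t", "/t/"

    interact(Block("c", "/c/"), Block("a", "/a/"), "left-of")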
  • According to a first aspect, the invention consists of a manually manipulable device adapted to present an individual characterisation to a user comprising a processor, a power source, a communications unit, a response generator, and a proximity sensor adapted to sense the close proximity of a similar device, such that a user can manipulate the device and generate a sensory response in said response generator or a response generator of a similar device, in accordance with the proximity of one or more similar devices.
  • According to still another aspect, the invention consists of a set of two or more manually manipulable devices, each adapted to present an individual characterisation to a user and to be locatable relative to other such devices in multiple different arrangements, wherein each device comprises a processor, a power source, a response generator, and a communications unit, such that the devices generate a sensory response through said response generators in accordance with the arrangement of the devices selected by a user.
  • The characterisation may comprise visual display material or audio output material, and will vary depending on the particular application or purpose of the device or devices. For example, visual display material may comprise a letter or group of letters (e.g. phoneme) or word or words, and the sensory response may comprise speech corresponding to a word or phrase or sentence spelt out by the letters or words. In another application, visual display material may comprise a number or mathematical symbol, and the sensory response may comprise speech relating to mathematical properties of the numbers on the devices. In yet another application, visual display material may comprise a musical symbol and the sensory response may be an audio musical response. In an example in which the characterisation comprises audio output material, this may comprise the audio equivalent of any of the examples of visual display material given above.
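  • As a hypothetical illustration of the characterisation idea, the letter, number and music examples above can share one small data model; the field names are assumptions, not terms from the disclosure:

    from dataclasses import dataclass

    @dataclass
    class Characterisation:
        kind: str      # 'letter', 'phoneme', 'word', 'number', 'symbol' or 'note'
        display: str   # visual display material, e.g. 'c' or '3'
        audio: str     # audio output material, e.g. the phoneme '/c/'

    examples = [
        Characterisation("letter", "c", "/c/"),
        Characterisation("number", "3", "three"),
        Characterisation("note", "G", "G4 tone"),
    ]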
  • In some implementations of the invention, the sensory response comprises an audio response which may be generated by one or more devices. Thus, each device incorporates an audio generator to provide an audio response. However, in other examples of the invention, the sensory response may instead, or in addition, comprise a visual response, which may be generated by one or more devices.
  • In a preferred embodiment of the invention, each device incorporates a visual display unit which displays visual display material and/or is able to generate a visual sensory response, which may be a static or animated visual display. Preferably, each device is programmable to allow the visual display material and the sensory response to be programmed to suit different applications, for example, to accommodate letters or words or numbers or musical symbols as described above, or any other visual display material, and to generate corresponding audio or visual responses.
  • Therefore, a device according to the invention is preferably a fully programmable, multifunctional device which can be adapted for use as a learning aid in relation to language, mathematics or music or other subjects. Such a device can be readily adapted to be used in the manner of known multi-component, educational apparatus such as Cuisenaire rods (used to teach arithmetic), dominoes and jigsaws, each component (rod, domino or jigsaw piece) being embodied in the form of a device according to the invention, which is then able to respond visually or audibly to enhance the experience of the user of the apparatus.
  • The communications unit incorporated in the device is adapted to communicate with similar devices with which it is used to co-ordinate the sensory response appropriate to an array of multiple devices. Each device communicates relevant information about itself corresponding to its characterisation, which may be a simple identity code. The sensory response is made evident through one or more of the devices, and the arrangement could also include a separate response generator.
  • Communication of a sensory response to any device preferably occurs via the communications unit.
  • Preferably, the communications unit is a wireless device that may be implemented using mobile telephone technology or the like.
  • Each device is preferably provided with a proximity sensor, or multiple proximity sensors, adapted to sense the proximity of a similar device in any one of multiple adjacent positions, for example, adjacent to each of multiple edges of the device. Each device is preferably further adapted to identify an adjacent device and to communicate information of both the identity and position of an adjacent device to other devices or to the central control unit via said communication unit so that an appropriate response can be generated.
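  • A minimal sketch, assuming a per-edge reading and a simple message format (neither of which is specified in the disclosure), of the identity-and-position report such a device might emit:

    EDGES = ("left", "right", "top", "bottom")

    def report_neighbours(device_id, edge_readings):
        # edge_readings maps an edge name to the sensed neighbour id, or None
        return [
            {"from": device_id, "edge": edge, "neighbour": seen}
            for edge, seen in edge_readings.items()
            if seen is not None
        ]

    # e.g. device 7 senses device 3 against its right edge:
    print(report_neighbours(7, {"left": None, "right": 3, "top": None, "bottom": None}))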
  • The proximity sensor may comprise a magnetic or an electrical device, and may require physical contact between adjacent devices to be operational.
  • Preferably, a manually manipulable device according to the invention is constructed with a robust outer casing suitable for handling by a child aged 3 or older.
  • Preferably, a manually manipulable device according to the invention has registration features, such as protrusions and indents, in its outer surface that allow the device to be placed in registration with other such devices. Preferably, the registration features provide a visual guide during the registration process. The registration features may interlock adjacently located manually manipulable devices according to the invention. In one embodiment, a manually manipulable device according to the invention is arranged to provide an indication when registration with another such device is achieved. The indication may be audible or visible in nature.
  • Adjacent contacting edges of devices may be adapted to fit together or interlock only when correctly orientated so that both display said visual display material the same way up (i.e. top to bottom). A rectangularly shaped device may be adapted to be orientated with a similar device adjacent to each of its four side edges, and the proximity sensor is then adapted to sense each adjacent device.
  • In an alternative embodiment of the invention, the devices are used in conjunction with a board, tray or base on which they are placed and which is capable of identifying the location and identity of each device and communicating this to a central control unit or one or more of the devices so that they can generate the sensory response. The board itself may consist of a screen which is able to generate a display appropriate for the particular application and/or to generate the sensory response. In this alternative embodiment of the invention, the individual devices may not need to incorporate the proximity sensor because of the location sensing ability of the board.
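  • Assuming the board resolves each block to a grid cell (the disclosure does not fix a mechanism), the central adjacency computation that replaces per-device proximity sensing might be sketched as:

    def adjacent_pairs(positions):
        # positions maps a block id to its (row, col) cell on the board
        pairs = []
        ids = sorted(positions)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                (ra, ca), (rb, cb) = positions[a], positions[b]
                if abs(ra - rb) + abs(ca - cb) == 1:   # edge-to-edge cells
                    pairs.append((a, b))
        return pairs

    print(adjacent_pairs({1: (0, 0), 2: (0, 1), 3: (2, 2)}))   # [(1, 2)]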
  • In the above alternative embodiment, the board may be adapted so that it can recharge individual devices when placed in contact with it. Furthermore, this recharging feature may be provided in a board not having the device location capability.
  • A device according to the invention may also incorporate a camera that allows an image to be captured, this image being used as said visual display material on a visual display unit also incorporated in the device, or the image can be used in a visual sensory response.
  • A device according to the invention may also incorporate a microphone to allow sound to be captured and used in an audio sensory response.
  • A device according to the invention may also incorporate data input means in the form of a handwriting recognition device to input words, letters, symbols or numbers for use in characterisation of the device or programming a sensory response to be produced by the device.
  • It will be appreciated that an audio sensory response, such as incorporated in any of the embodiments described above, may take the form of a directional or stereo audio response by arranging that two or more devices are controlled simultaneously or sequentially to generate appropriate sounds.
  • Programming of each device may be achieved by any of a number of different methods including connection to memory media such as smart cards or memory sticks; via a personal computer or hand-held computing device; or via said communications unit. In one example, each device may make use of the communications unit to receive information from a television broadcast so that the device is adapted for use in conjunction with a television programme being broadcast.
  • A device according to the invention is preferably further adapted so that it incorporates a user sensor sensitive to touch and/or movement so that it can trigger a characterisation output when handled by a user. The characterisation output may comprise a visual or audio output or both.
  • Specific technologies that can be used in embodiments of the invention include networked distributed intelligent small computers known as Specks or Motes; micro-electro-mechanical systems (MEMS), especially for audio components and sensors; and ZigBee radio or similar communications technology.
  • A manually manipulable device according to the invention is, from one aspect, a computing unit and as such can be designed to be a thin client in a client-server relationship with some other entity.
  • In one embodiment, a manually manipulable device according to the invention comprises a 32-bit RISC (or better) CPU, memory, a graphics processor, an audio processor, a communications processor, internal data storage, a rechargeable power source and a touch-sensitive audio-visual display unit. The CPU is preferably capable of processing 200 Million Instructions Per Second (MIPS) or better. The CPU can preferably address 16 Mb (or better) of Random Access Memory. The graphics processor and visual display will preferably be capable of rendering screen resolutions of 160×160 pixels (or better) in 8-bit colour (or better). Other versions will be able to process full motion video at 12.5 frames per second (or better) with 16-bit colour (or better) synchronised to audio. Other versions will have live video or still image capture via a built-in camera. The audio processor will preferably be capable of playback of 4-bit, 4 kHz mono audio (or better) and polyphonic tones. Enhanced versions will feature audio recording capability. The internal storage may be provided by Secure Digital (SD) cards, MultiMedia Cards (MMC) or a hard disc arrangement. The communications processor will preferably include support for industry standard wireless protocols including Bluetooth and in future will support other emergent protocols including IEEE 802.15.4 and other near field communication protocols. It is presently preferred that a manually manipulable device according to the invention will have a real time operating system (RTOS).
  • Video apparatus could for example involve the use of screens 5 cm × 5 cm, but 8 cm × 8 cm might also be acceptable. The screens could for example comprise thin film transistor (TFT) screens with a 2.5″ (4:3) active matrix, a resolution of 880 × 228 RGB delta, a pixel size of 56.5 × 164 µm, fully integrated single-phase analogue display drivers, a signal input voltage of 3 V, a driver frequency of 3 MHz and a driver power consumption of 15 mW.
  • The power source is preferably a rechargeable battery and might comprise a photovoltaic generator.
  • The user sensor may also sense manipulation of the device by a user indicative of a positioning movement of the device requiring an assessment of its proximity relative to similar devices and the need to generate a sensory response corresponding to one of said arrangements of devices.
  • According to a further feature of the invention, each of said manually manipulable devices incorporates a visual display unit to display visual display material, and two or more of said devices are adapted to be arranged in a row so that said visual display material “reads” in a meaningful manner along said row. A similar device is locatable adjacent to one side of said row of devices, and thereby triggers a change in the visual display material on said similar device so that it matches that of said row of devices. For example, said similar device can be located below said row of devices to acquire a combination of characters from the row above it. This device displaying said combination of characters can then be re-used in a further row of devices to create a new combination of characters.
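  • The row-reading and copy-down feature lends itself to a short sketch; the word list below is a stand-in for the pre-programmed vocabulary, and all names are hypothetical:

    WORDS = {"at", "cat", "mat", "sat"}

    def read_row(row):
        # Concatenate the display material along the row, left to right
        text = "".join(block["display"] for block in row)
        return text if text in WORDS else None

    def copy_down(row, target):
        word = read_row(row)
        if word is not None:
            target["display"] = word   # the similar device acquires the combination
        return target

    row = [{"display": "c"}, {"display": "a"}, {"display": "t"}]
    print(copy_down(row, {"display": "u"}))   # {'display': 'cat'}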
  • Each device may have an ON/OFF switch to allow it to be reset to a start up condition, for example, displaying initial pre-programmed visual display material.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • By way of example only, certain embodiments of the invention will now be described with reference to the accompanying drawings, in which:
  • FIG. 1 illustrates the external physical constitution of an interactive block;
  • FIG. 2 illustrates, conceptually, the internal constitution of the interactive block of FIG. 1,
  • FIG. 3 illustrates how blocks of the kind illustrated in FIG. 1 can be connected in registration with one another;
  • FIG. 4 illustrates how blocks of the kind shown in FIG. 1 can be used in a learning activity;
  • FIG. 5 illustrates schematically an interactive block; and
  • FIG. 6 illustrates schematically a tray or board which can interact with blocks of the kind shown in FIG. 5.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • One embodiment of the invention consists of a set of blocks, say 12 blocks, each being rectangular in shape and adapted to be positioned edge-to-edge with other blocks on either side (referred to as left-hand and right-hand edge) in the manner of a row, and top edge and bottom edge in the manner of a column. Each block incorporates a display screen over most of its front or upper surface, which forms part of an electronic visual display unit capable of displaying visual display material according to display data derived from a database. In this embodiment, the visual display material consists of a lower case letter of the alphabet which is displayed on the screen when the block is first activated. Each block may incorporate a switch that allows it to be activated or deactivated, and operation of the switch initiates a start-up condition in which a pre-programmed letter is displayed. Programming of the blocks may be such that different combinations in a row can spell out fifteen to twenty different words appropriate for teaching a young child to read.
  • Each block incorporates a means of displaying its orientation as far as top and bottom is concerned, which may invoke the shape of the block or an indicator displayed in the display screen.
  • Each block further incorporates a proximity sensor or sensors adapted to allow it to sense the proximity of another block aligned edge-to-edge with it, preferably involving contact between said adjacent edges, either at the left-hand edge or right-hand edge or top edge or bottom edge. The proximity sensor, or other ID sensor means independent of it, is adapted to sense the identity of each adjacent block.
  • Each block further incorporates a touch and/or movement sensor.
  • Each block further incorporates a wireless communications unit through which it can communicate with another block to transmit information relating to its own identity and visual display material and the identity and location of adjacent blocks, and to receive information causing the visual display unit to change the visual display material.
  • Each block preferably further incorporates an audio generator which is adapted to produce an audio response in accordance with internal programming information received via the wireless communications unit.
  • It will be appreciated that a block with the communications, visual display and audio generator capability described above can be readily implemented using mobile telephone technology. Proximity sensors, ID sensors and touch and movement sensors can also be readily implemented using known technology. It will be appreciated that each block has its own power supply and incorporates a processor or processors which provide the required functionality.
  • A set of blocks is adapted to be sufficient in itself to provide the functionality described below with the processors operating in accordance with pre-programmed instructions and the inputs from the sensors of each so as to produce visual and audio responses in the blocks.
  • The constitution of an example one of the blocks is shown in FIGS. 1, 2 and 5. The internal construction of a block is shown conceptually in FIG. 2 and in block diagram form in FIG. 5. FIG. 3 illustrates how blocks of this kind can be placed in registration with one another both vertically and horizontally.
  • FIG. 5 illustrates the main components of a block. It will, of course, be apparent to the skilled person that this is a high level diagram illustrating only key components of the block. As shown in FIG. 5, a block 500 comprises a processor 510, a memory 512, an RF transceiver 514, a screen 516, a speaker 518, a magnetic switch 520, a touch sensor 522, a movement sensor 524, a docking port 526 and a battery 528. The RF transceiver 514 enables the block 500 to communicate wirelessly with other, at least similar, blocks in the vicinity. The screen 516 and the speaker 518 allow visual and audio information to be presented to a user of the block 500. The magnetic switch 520 is activated by the proximity of another, at least similar, block. The touch sensor 522 is provided at the exterior of the block 500 to detect a user touching at least that area of the block 500. The movement sensor 524 detects movement of the block 500 by a user. The docking port 526 is for receiving a memory card to load software/data into the block 500. The block 500 also includes a battery 528 that provides power to allow the various devices within the block to operate. The processor 510 processes, with the aid of memory 512, information received from the RF transceiver 514, the switch 520, the touch sensor 522, the movement sensor 524 and the docking port 526 to cause, as appropriate, the RF transceiver 514 to communicate with other blocks and/or cause the screen 516 and/or the speaker 518 to present information to a user of the block 500.
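  • As an editorial sketch of the FIG. 5 processing path (sensor and radio inputs in; radio, screen and speaker actions out), with event names keyed to the reference numerals but otherwise invented:

    def process_inputs(events, state):
        actions = []
        for source, value in events:               # e.g. ('touch_522', True)
            if source == "magnetic_switch_520" and value:
                actions.append(("rf_514", "announce identity to neighbour"))
            elif source in ("touch_522", "movement_524") and value:
                actions.append(("speaker_518", state["audio"]))
                actions.append(("screen_516", state["display"]))
            elif source == "docking_526":
                state.update(value)                 # software/data from card
        return actions

    state = {"display": "c", "audio": "/c/"}
    print(process_inputs([("touch_522", True)], state))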
  • FIG. 6 shows a tray 600 for use with blocks, e.g. 610 to 616 of the kind described above with respect to FIGS. 1, 2 and 5. The board 600 comprises a detector 618 for determining the location and identity of blocks placed on the board. The board 600 also includes a charger for recharging the batteries of blocks that are placed on the board. The board also includes a screen 622 and is configured to present information to a user via the screen in response to interactions of the user with blocks on the board.
  • Examples of how the set of blocks can be used as alphabet blocks will now be described.
  • Sam is four and a half. She's just started in her reception year at school where she's learning to read and write. Her parents are keen to help her learn at home and buy her a set of blocks with some preloaded reading software appropriate for her age.
  • Sam opens the box and takes out the blocks. Her parents are standing over, curious about how they work.
  • Each is displaying a different lower case letter.
  • She goes to pick one up and the unit sounds the letter it is displaying. For example, ‘/c/’. Moving each of the blocks she realises they all do the same.
  • In an alternative option, if there is 15 seconds of inactivity, one block could say ‘Try spelling a word, how about cat’. Alternatively, pressing on a block could say, ‘c sounds like /c/. /c/ is for cat. Move the blocks together to spell cat.’
  • Sam puts two of the blocks next to each other. Starting with the one on the left, the blocks read in turn the letters they are displaying. For example ‘/d/, /o/’. They then read the combined sound. For this example the blocks would say ‘do’.
  • When she puts three ‘random’ letters together (‘/c/’, ‘/f/’, ‘/g/’), they make no sound.
  • She plays around with some different combinations until a word is spelt. For example, ‘/c/, /a/, /t/. You've spelt cat. Well done.’ At this point a cat leaps onto the screen, runs around and miaows.
  • In an alternative option, the blocks prompt the child what to do next. For example, ‘Now you can copy the word you've made onto its own block by placing one below. Or you can try and spell another word.’
  • When Sam puts another block below the word she has spelt, the word jumps down onto that single block. It says ‘cat’ when she presses it.
  • The three blocks that originally spelt the word are now free to be used for another word.
  • As described above, each block is individually responsive to touch or movement and reacts audibly and visually depending upon what it displays.
  • If each block is responsive to both touch and movement separately, then each can have a secondary response, such as giving an example of use.
  • If a letter is displayed, e.g. “c”, the block sounds the letter as it is said in the alphabet and phonetically. For example, ‘C. C sounds like /c/ for cat’. An animation may play on the screen relating to the letter and the example given. A secondary response might suggest what the user can do next. For example, ‘Can you spell Cat?’
  • If a word is displayed e.g. “cat”, the block sounds the phonetic letters for the word. For example, ‘/c/, /a/, /t/ spells cat’. An animation relating to the word plays on the screen. A secondary response might suggest the spelling of another word from the available letters if this is possible.
  • If a phonetic sound is displayed e.g. “ch”, the block sounds the combined phonetic sound ‘/ch/ as in lunch’. The screen displays an animation of some food being eaten.
  • When blocks are placed next to each other they react depending on what is on each. This could be a phonetic sound e.g. ‘/ch/’, a word e.g. ‘cat’ or random letters e.g. ‘/k/, /r/, /f/’.
  • If the user places individual blocks alongside each other then they respond according to the combination of letters they display.
  • If a phonetic sound is created “ch”, the blocks sound the combined sound, ‘/ch/’. They could also give a short example of use ‘/ch/ as in lunch, yum, yum, yum’.
  • If a word is created, e.g. “cat”, the blocks sound the individual letters followed by the word. For example, ‘/c/, /a/, /t/, spells cat. Well done, you've spelt cat’. The displays play a short animation, in this example a picture of a cat running between the two blocks. This happens whenever one of the joined blocks is pressed.
  • If a new word is created (plural or completely new) by adding a letter or letters to a current word or phonetic sound, the response might be, for example, ‘/c/, /a/, /r/, /t/, spells cart. Are you coming for a ride?’ or ‘/c/, /a/, /t/, /s/ spells cats. Here they come!’. The displays animate according to the word spelt if the word has an associated animation in the database. So in the above examples, a horse and cart could drive on and off the screens, or several cats could start playing around.
  • If a random set of letters is placed next to each other, for example ‘/d/, /f/, /r/, /g/’, no sound is generated and no animation is displayed.
  • Animation and sound will only be available for some of the words that can be created using the blocks, as determined by a related response database held in one or each block or a central control unit.
  • If a user places one block adjacent the top edge of another, the lower block inherits the property of the upper block. Placing multiple blocks above or below will also cause a reaction between the blocks. For example, if the user places one block above another and the top block shows ‘/b/’, the lower block will also become a ‘/b/’.
  • A user can place a word spelt out over several blocks onto one block by placing a block below. This could also be used to join a ‘/c/’ and an ‘/h/’ on a single ‘/ch/’ block.
  • If a user has spelt a word or phonetic sound using three individual blocks, for example, ‘/c/’, ‘/a/’ and ‘/t/’ spelling ‘cat’, the user can then place a fourth block anywhere under the three letter blocks and the word “cat” moves onto a single block. However, if a user tries to copy two random letters onto a single block it will not work. For example ‘/g/’ and ‘/f/’ cannot be joined on a single ‘/gf/’ block.
  • Likewise if the user has two word blocks that don't make a third word, they cannot be copied onto a single block. For example ‘cat’ and ‘sat’ cannot be joined to make a ‘catsat’ block.
  • If a user has the word ‘cat’ on a single block and wants to split it into three separate letters, they need to place three blocks below the word block. The three letters each go onto their own block below, in right-to-left order.
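Purely as an illustration, and not part of the patent disclosure, the combination, copy-down and split rules above can be modeled in a few lines of Python. Everything in this sketch is assumed: the Block class, the respond, copy_down and split_down helpers, and the tiny RESPONSES dictionary are hypothetical stand-ins for the response database and processor logic described above.

```python
# Hypothetical response database: entries exist only for meaningful
# combinations (words and phonetic sounds); random strings are absent,
# mirroring the "no sound, no animation" rule above.
RESPONSES = {
    "ch":   ("/ch/ as in lunch, yum, yum, yum", "food_being_eaten"),
    "at":   ("/a/, /t/ spells at", None),
    "cat":  ("/c/, /a/, /t/ spells cat. Well done, you've spelt cat", "cat_running"),
    "cart": ("/c/, /a/, /r/, /t/ spells cart. Are you coming for a ride?", "horse_and_cart"),
    "cats": ("/c/, /a/, /t/, /s/ spells cats. Here they come!", "cats_playing"),
    "mat":  ("/m/, /a/, /t/ spells mat", "mat_animation"),
    "sat":  ("/s/, /a/, /t/ spells sat", "sat_animation"),
}

class Block:
    """One display block; `face` is the letter or letters currently shown."""
    def __init__(self, face):
        self.face = face

def respond(row):
    """Blocks placed side by side react to the combined string they show."""
    combined = "".join(block.face for block in row)
    if combined in RESPONSES:
        sound, animation = RESPONSES[combined]
        return sound, animation
    return None, None          # random letters: no sound, no animation

def copy_down(row, target):
    """A block placed below a meaningful row inherits the whole combination."""
    combined = "".join(block.face for block in row)
    if combined in RESPONSES:  # only non-random combinations are copied
        target.face = combined
        return True
    return False               # e.g. '/g/' and '/f/' never join on one block

def split_down(word_block, empty_blocks):
    """Placing one block per letter below a word block splits the word,
    one letter per block (the ordering rule is omitted in this sketch)."""
    if len(empty_blocks) != len(word_block.face):
        return False
    for block, letter in zip(empty_blocks, word_block.face):
        block.face = letter
    return True
```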
An example of use of a set of alphabet blocks operating according to the above principles is illustrated in FIG. 4, in a number of steps 1-6.
1. Blocks are taken out of the box and arranged on the floor.
2. The user puts ‘/c/’ and ‘/h/’ together and the blocks sound ‘/ch/’. They put ‘/g/’ underneath and copy ‘/ch/’ onto it. Trying to copy ‘/t/’ and ‘/m/’ onto ‘/g/’ doesn't work.
3. ‘/a/’ and ‘/t/’ are joined to make ‘at’ and copied onto a single block.
4. ‘/m/’ is put in front of ‘at’ to make ‘mat’. The individual ‘/a/’ and ‘/t/’ blocks are still joined to the top of ‘at’, but have no direct effect on the ‘/m/’ as they are not directly above it but to one side. ‘/u/’ is put below the ‘/m/’ of ‘mat’ and ‘mat’ is copied onto that single block, which is then removed (not illustrated).
5. An ‘/s/’ block is put in front of the ‘/a/’ and ‘/t/’ blocks to spell ‘sat’. As the ‘/m/’ of ‘mat’ is now below the ‘/s/’ block, the word ‘sat’ is copied onto it. ‘sat’ is also copied onto the ‘at’ block. The two ‘sat’ blocks don't interact with each other as a new word or sound hasn't been created. Likewise, when an ‘/r/’ block is placed below either of the ‘sat’ blocks, nothing is copied down.
6. A chain of various words can be created from the blocks following the principles described in the functional specification, as exercised in the usage sketch below.
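Continuing the same hypothetical sketch (the illustrative Block, respond and copy_down names are assumptions, not the patent's implementation), steps 2-5 of the FIG. 4 walk-through might be replayed like this:

```python
# Step 2: '/c/' and '/h/' together sound '/ch/', which copies onto '/g/';
# random letters such as '/t/' + '/m/' do not copy.
c, h, g = Block("c"), Block("h"), Block("g")
print(respond([c, h])[0])               # /ch/ as in lunch, yum, yum, yum
copy_down([c, h], g)                    # True: g now shows 'ch'
copy_down([Block("t"), Block("m")], g)  # False: 'tm' is random, g unchanged

# Step 3: '/a/' and '/t/' join to make 'at' on a single block.
a, t, at = Block("a"), Block("t"), Block("x")
copy_down([a, t], at)                   # True: at now shows 'at'

# Step 4: '/m/' in front of 'at' makes 'mat', which copies onto '/u/'.
m, u = Block("m"), Block("u")
print(respond([m, a, t])[0])            # /m/, /a/, /t/ spells mat
copy_down([m, a, t], u)                 # True: u now shows 'mat'

# Step 5: '/s/' in front of '/a/' and '/t/' spells 'sat'; joined blocks
# that do not form a new word or sound produce no response.
print(respond([Block("s"), a, t])[0])   # /s/, /a/, /t/ spells sat
print(respond([u, Block("sat")]))       # (None, None): 'matsat' is random
```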
The invention is applicable to diverse areas which include, but are not limited to, play, entertainment, adornment and decoration, environment, industry and learning (of, for example, languages, mathematics and musical skills/knowledge).
Play applications may include a variety of playful games using the blocks and, optionally, a tray of the type mentioned in the introduction. These include new games as well as enhancements of typical existing board and card games with additional features, by virtue of the fact that the pieces (blocks) can change their image and emit sounds, and the board (interactive base) can also change its image. Further, new forms of toy such as farmyards and zoos can be created and become elements of animated stories.
In relation to adornment and decoration, in the educational context, IA blocks can be worn as badges that enable students to role-play their various functions (letters, sounds, numbers) and interact with other badge-wearing children to form words, tunes and equations. Beyond this, IA blocks have implicit emotive, aesthetic, interactive and descriptive capabilities. Blocks in combination can be used to trigger social and artistic interactions between people or to create more complex installations.
In environmental and industrial settings, variations of the devices can enable audio and visual data/systems, alone or in combination (e.g. for health and safety measurement and control).

Claims (25)

1. A method of controllably presenting changeable, processor-generated individual characterizations presented on each of a plurality of objects that selectively interact, the method comprising:
generating and displaying under powered processor-control first visual display material on a first movable object, the first visual display material having a first changeable individual characterization having a first property;
sensing proximity and relative position of second visual display material generated and displayed under powered processor-control on a second movable object separate to the first movable object, the second object brought into processor-resolvable interacting proximity with the first moveable object by manipulation of one or more of the first movable object and the second movable object, the second visual display material having a second changeable individual characterization independent to the first individual characterization, the second changeable individual characterization having a second property;
in response to processor-resolved interaction between said first and second changeable individual characterizations arising from sensed proximity and relative position of the first and second objects, generating a user-perceivable sensory response from a response generator, wherein the user-perceivable sensory response is dependent upon said sensed relative positions of the first visual display material to the second visual display material and is indicative of a contextual relationship that arises between said first property of the first changeable individual characterizations and the second property of the second changeable individual characterizations; and
under processor-control, selectively and autonomously changing with time the individual characterizations on at least one of the first and second objects such that the first property and the second property change to allow different processor-resolvable interactions to take place between the first and second objects, which different processor-resolvable interactions give rise to new user-perceivable sensory responses.
2. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the change in property and individual characterization presented by one of the plurality of objects is dependent upon the processor-resolved interaction and contextual relationship between the first changeable individual characterization presented on the first object and the second changeable individual characterization presented on the second object.
3. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the change to the first property and/or the second property includes an auditory change.
4. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the change to the first property and/or the second property includes a visual change.
5. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the user-perceivable sensory response is a phonetic sound produced by combined interaction of said first and second changeable individual characterizations and their respective first and second properties.
6. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein generating the user-perceivable sensory response further generates:
an audible representation of the first changeable individual characterization; and
an audible representation of the second changeable individual characterization.
7. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, the method further comprising:
under processor-control, selectively changing with time said changeable individual characterizations displayed on said plurality of objects based upon:
i) nearby detection and relative position of objects; and
ii) individual characterizations presented by interacting objects at the time of their detection and interaction.
8. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the user-perceivable sensory response is generated for only meaningful non-random interactions between said first and second changeable individual characterizations.
9. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein the user-perceivable sensory response generated from the response generator includes an audible response, a visual response or a combination of an audible and visual response.
10. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 8, wherein the user-perceivable sensory response includes a related visual animation.
11. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, wherein generating the user-perceivable sensory response includes generating at least one of:
a phoneme;
a letter, word, phrase or sentence;
speech relating to mathematical properties of a visually presented number; and
an audio musical response corresponding to a musical symbol.
12. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, further comprising:
arranging said first and second objects in a line so that said first visual display material and said second visual display material reads in a meaningful manner along said line; and
locating a third object adjacent to one side of said line of objects, the third object having a third changeable individual characterization presented as third visual display material, the step of adjacently locating causing the third visual display material presented on said third object to change to take on the combination of said first visual display material and said second visual display material that reads in the meaningful manner along said line.
13. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, further comprising:
locating a third object proximate to the first and second objects, the step of locating causing positioning of the third object in one of a multiplicity of different positions below the first and second objects, the third object having a third changeable individual characterization;
determining relative positions between the first, second and third objects;
in the event that said relative position between the first and second objects results in production of a non-random, meaningful interaction between at least the first and second individual characterizations that produces one of a phoneme, a word, a mathematical property or a musical response, effecting a processor-controlled change to the third changeable individual characterization presented on the third object by generating a new changeable individual characterization that is presented on the third object and which new changeable individual characterization reflects:
i) the meaningful interaction taking place between the first and second individual characterizations; and
ii) the relative position between the first, second and third objects.
14. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 12, wherein each object is arranged to generate and display a changeable individual characterization in the form of:
at least one letter;
a word;
a number;
a mathematical symbol; or
a musical symbol.
15. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, further comprising:
selectively programming each of the plurality of objects to present over time a multiplicity of properties under processor-control.
16. The method of controllably presenting changeable, processor-generated individual characterizations according to claim 1, further comprising:
based on relative position between the first and second movable objects, having the first movable object inherit properties associated with at least the second changeable individual characterization of the second movable object.
17. A method of controllably presenting changeable, processor-generated individual characterizations presented on each of a plurality of objects that selectively interact, the method comprising:
generating and presenting under powered processor-control first visual display material on a processor-controlled object, the first visual display material having a first changeable individual characterization with a first property;
sensing proximity and relative position of second visual display material generated and presented under processor-control on a second object movable independently of said processor-controlled object, the second object brought into processor-resolvable interacting proximity with said processor-controlled object by manipulation of one or more of said processor-controlled object and the second object, the second visual display material having a processor-controlled second changeable individual characterization independent of the first individual characterization, the second changeable individual characterization having a second property;
sensing proximity and relative position of third visual display material generated and presented under processor-control on a third object movable independently of said processor-controlled object, the third object brought into processor-resolvable interacting proximity with said processor-controlled object by its relative manipulation with respect to the second object, the third visual display material having a processor-controlled third changeable individual characterization independent of the first and second individual characterizations, the third changeable individual characterization having a third property, the second and third properties of the second and third objects interacting with one another when located side-by-side in a line to produce a combination;
under processor control changing the first changeable individual characterization with the first property to a fourth changeable individual characterization with a fourth property such that the fourth changeable individual characterization is presented on said processor-controlled object, the fourth property different to the first property, the fourth individual characterization being the combination inherited from the second and third individual characterizations provided that the in-line side-by-side combination has a contextual relevance that is not random; and
determining new processor-resolvable interactions involving said processor-controlled object now having the fourth changeable individual characterization and generating and outputting audible and/or visual user-perceivable sensory responses from a response generator to reflect contextually relevant processor-resolvable interactions that reflect and/or involve the fourth changeable individual characterization.
18. The method of controllably presenting changeable, processor-generated individual characterizations of claim 17, further comprising:
assembling a first line of objects containing only the processor-controlled object and assembling a second line of objects containing the second and third objects but not the processor-controlled object;
determining that the first and second lines are interacting; and
determining whether the second and third changeable individual characterizations combine to provide a contextually relevant combination that is not random,
subject to the second and third changeable individual characterizations combining to provide a contextually relevant combination that is not random, causing the first changeable individual characterization to change to the fourth changeable individual characterization by inheriting the contextually relevant combination based on an edge justification determined by the processor-controlled object in the first line relative to the contextually relevant combination in the second line.
19. The method of controllably presenting changeable, processor-generated individual characterizations of claim 17, wherein the contextually relevant combination inherited as the fourth changeable individual characterization includes at least the second and third changeable individual characterizations.
20. The method of controllably presenting changeable, processor-generated individual characterizations of claim 17, further comprising:
assembling a first line of objects containing only the processor-controlled object and assembling a second line of objects containing at least the second and third objects but not the processor-controlled object;
determining that the first and second lines are interacting; and
determining whether at least the second and third changeable individual characterizations combine to provide a contextually relevant combination that is not random,
subject to the second and third changeable individual characterizations combining to provide a contextually relevant combination that is not random, causing the first changeable individual characterization to change to the fourth changeable individual characterization by inheriting the contextually relevant combination based on an edge justification determined by the processor-controlled object in the first line relative to the contextually relevant combination in the second line.
21. A method of controllably presenting changeable, processor-generated individual characterizations presented on each of a plurality of objects that selectively interact, the method comprising:
generating and presenting under powered processor-control first visual display material on a processor-controlled object, the first visual display material having a first changeable individual characterization with a first property;
sensing proximity and relative position of second visual display material generated and presented under processor-control on a second object movable independently of said processor-controlled object, the second object brought into processor-resolvable interacting proximity with said processor-controlled object by manipulation of one or more of said processor-controlled object and the second object, the second visual display material having a processor-controlled second changeable individual characterization independent of the first individual characterization, the second changeable individual characterization having a second property;
sensing proximity and relative position of third visual display material generated and presented under processor-control on a third object movable independently of said processor-controlled object, the third object brought into processor-resolvable interacting proximity with said processor-controlled object by its relative manipulation with respect to the second object, the third visual display material having a processor-controlled third changeable individual characterization independent of the first and second individual characterizations, the third changeable individual characterization having a third property, the second and third properties of the second and third objects interacting with one another when located side-by-side in a line to produce a combination;
sensing proximity and relative position of fourth visual display material generated and presented under processor-control on a fourth object movable independently of said processor-controlled object, the fourth object brought into processor-resolvable interacting proximity with said processor-controlled object by its relative manipulation with respect to the second and third objects, the fourth visual display material having a processor-controlled fourth changeable individual characterization independent of the first, second and third individual characterizations, the fourth changeable individual characterization having a fourth property, wherein the second, third and fourth properties of the second, third and fourth objects interact with one another when located side-by-side in a line to produce a combination;
under processor control changing the first changeable individual characterization with the first property to a fifth changeable individual characterization with a fifth property such that the fifth changeable individual characterization is presented on said processor-controlled object, the fifth property different to the first property, the fifth individual characterization being the combination inherited from the second, third and fourth individual characterizations provided that the in-line side-by-side combination has a contextual relevance that is not random; and
determining new processor-resolvable interactions involving said processor-controlled object now having the fifth changeable individual characterization and generating and outputting audible and/or visual user-perceivable sensory responses from a response generator to reflect contextually relevant processor-resolvable interactions that reflect and/or involve the fifth changeable individual characterization.
22. The method of controllably presenting changeable, processor-generated individual characterizations of claim 21, further comprising:
assembling a first line of objects containing only the processor-controlled object and assembling a second line of objects containing the second, third and fourth objects but not the processor-controlled object;
determining whether the first and second lines are interacting; and
determining whether the second, third and fourth changeable individual characterizations combine to provide a contextually relevant combination that is not random,
subject to the second, third and fourth changeable individual characterizations combining to provide a contextually relevant combination that is not random, causing the first changeable individual characterization to change to the fifth changeable individual characterization by inheriting the contextually relevant combination based on an edge justification determined by the processor-controlled object in the first line relative to the contextually relevant combination in the second line.
23. The method of controllably presenting changeable, processor-generated individual characterizations of claim 21, wherein the contextually relevant combination inherited as the fifth changeable individual characterization includes at least the second, third and fourth changeable individual characterizations.
24. The method of controllably presenting changeable, processor-generated individual characterizations of claim 21, further comprising:
inheriting and displaying a word produced from multiple similar interacting objects positioned in the line above said processor-controlled object.
25. The method of controllably presenting changeable, processor-generated individual characterizations of claim 17, further comprising:
inheriting phonetic properties produced from at least two similar interacting objects positioned in a line above said processor-controlled object, provided that interacting properties of the at least two similar objects combine to provide a non-random phonetically meaningful combination.
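For orientation only, the sense-resolve-respond flow recited in independent claim 1 can be restated as a non-authoritative sketch, reusing the illustrative Block class from the description's sketch. The sense_proximity, resolve and emit callables are hypothetical stand-ins for the claimed sensor, processor logic and response generator, not the patent's actual hardware.

```python
import random
import time

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def characterization_loop(first, second, sense_proximity, resolve, emit):
    """Hypothetical restatement of claim 1: `first` and `second` are Block
    objects; `sense_proximity` stands in for the proximity/relative-position
    sensor, `resolve` for the processor that derives a contextual
    relationship between the displayed properties, and `emit` for the
    response generator."""
    while True:
        # sense proximity and relative position of the two display materials
        position = sense_proximity(first, second)
        if position is not None:
            # processor-resolved interaction between the two characterizations
            relationship = resolve(first.face, second.face, position)
            if relationship is not None:
                emit(relationship)  # audible and/or visual sensory response
        # selectively and autonomously change a characterization with time so
        # that different interactions, and new responses, become possible
        first.face = random.choice(ALPHABET)
        time.sleep(1.0)
```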
US13/236,516 2005-03-24 2011-09-19 Method of changing processor-generated individual characterizations presented on multiple interacting processor-controlled objects Abandoned US20120007870A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/236,516 US20120007870A1 (en) 2005-03-24 2011-09-19 Method of changing processor-generated individual characterizations presented on multiple interacting processor-controlled objects

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB0506159A GB2424510A (en) 2005-03-24 2005-03-24 Interactive blocks.
GB0506159.3 2005-03-24
US11/142,955 US8057233B2 (en) 2005-03-24 2005-06-02 Manipulable interactive devices
US13/236,516 US20120007870A1 (en) 2005-03-24 2011-09-19 Method of changing processor-generated individual characterizations presented on multiple interacting processor-controlled objects

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/142,955 Continuation US8057233B2 (en) 2005-03-24 2005-06-02 Manipulable interactive devices

Publications (1)

Publication Number Publication Date
US20120007870A1 (en) 2012-01-12

Family

ID=34566497

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/142,955 Expired - Fee Related US8057233B2 (en) 2005-03-24 2005-06-02 Manipulable interactive devices
US13/236,528 Abandoned US20120007840A1 (en) 2005-03-24 2011-09-19 Processor-controlled object
US13/236,516 Abandoned US20120007870A1 (en) 2005-03-24 2011-09-19 Method of changing processor-generated individual characterizations presented on multiple interacting processor-controlled objects

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/142,955 Expired - Fee Related US8057233B2 (en) 2005-03-24 2005-06-02 Manipulable interactive devices
US13/236,528 Abandoned US20120007840A1 (en) 2005-03-24 2011-09-19 Processor-controlled object

Country Status (11)

Country Link
US (3) US8057233B2 (en)
EP (2) EP2363848A3 (en)
JP (1) JP5154399B2 (en)
CN (1) CN101185108B (en)
AT (1) ATE502371T1 (en)
DE (1) DE602006020726D1 (en)
DK (1) DK1899939T3 (en)
ES (1) ES2364956T3 (en)
GB (1) GB2424510A (en)
MX (1) MX2007011816A (en)
RU (1) RU2408933C2 (en)

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2441564A (en) * 2006-09-11 2008-03-12 Tts Group Ltd Apparatus for teaching synthetic phonics
US20100003651A1 (en) * 2008-07-02 2010-01-07 Med Et Al, Inc. Communication blocks and associated method of conveying information based on their arrangement
US9128661B2 (en) * 2008-07-02 2015-09-08 Med Et Al, Inc. Communication blocks having multiple-planes of detection components and associated method of conveying information based on their arrangement
US20100092928A1 (en) * 2008-09-19 2010-04-15 Mira Stulberg-Halpert System and method for teaching
US8552396B2 (en) * 2009-02-26 2013-10-08 The University Of Vermont And State Agricultural College Distributive, non-destructive real-time system and method for snowpack monitoring
ITTV20090059A1 (en) * 2009-03-26 2010-09-27 Giovine Vincenzo Di SOLID WITH LUMINOUS LED SOURCE AND ARTIFICIAL INTELLIGENCE ACTIVABLE IN ITS OPTICAL AND ACOUSTIC FUNCTIONS BY CONTACT OR PROXIMITY OR MOVEMENT AND MANAGED BY MEANS OF REMOTE CONTROL
US8742814B2 (en) 2009-07-15 2014-06-03 Yehuda Binder Sequentially operated modules
US8602833B2 (en) 2009-08-06 2013-12-10 May Patents Ltd. Puzzle with conductive path
KR101210280B1 (en) * 2009-09-02 2012-12-10 한국전자통신연구원 Sensor-based teaching aid assembly
WO2011054026A1 (en) * 2009-11-06 2011-05-12 David Webster A portable electronic device
KR101632572B1 (en) * 2009-11-25 2016-07-01 삼성전자 주식회사 Video wall display system
WO2011088670A1 (en) * 2010-01-25 2011-07-28 Locus Publishing Company Interactive information system, interactive information method, and computer readable medium thereof
US9213480B2 (en) * 2010-04-08 2015-12-15 Nokia Technologies Oy Method, apparatus and computer program product for joining the displays of multiple devices
US8639186B2 (en) * 2010-10-28 2014-01-28 Sondex Wireline Limited Telemetry conveyed by pipe utilizing specks
EP2687038B1 (en) 2011-03-18 2015-08-12 Telefonaktiebolaget L M Ericsson (PUBL) Indicating physical change in relation to an exterior of a network node module
US20140127965A1 (en) * 2011-07-29 2014-05-08 Deutsche Telekom Ag Construction toy comprising a plurality of interconnectable building elements, set of a plurality of interconnectable building elements, and method to control and/or monitor a construction toy
US11330714B2 (en) 2011-08-26 2022-05-10 Sphero, Inc. Modular electronic building systems with magnetic interconnections and methods of using the same
US9019718B2 (en) 2011-08-26 2015-04-28 Littlebits Electronics Inc. Modular electronic building systems with magnetic interconnections and methods of using the same
US9597607B2 (en) 2011-08-26 2017-03-21 Littlebits Electronics Inc. Modular electronic building systems with magnetic interconnections and methods of using the same
GB2496169B (en) * 2011-11-04 2014-03-12 Commotion Ltd Toy
US20130130589A1 (en) * 2011-11-18 2013-05-23 Jesse J. Cobb "Electronic Musical Puzzle"
DE102012004848A1 (en) * 2012-03-13 2013-09-19 Abb Technology Ag Simulation device for demonstrating or testing the functions of a control cabinet in a switchgear
US20140168094A1 (en) * 2012-12-14 2014-06-19 Robin Duncan Milne Tangible alphanumeric interaction on multi-touch digital display
CN103000056A (en) * 2012-12-21 2013-03-27 常州大学 Multimedia intelligent interactive all-in-one machine for teaching
NL2013466B1 (en) * 2014-09-12 2016-09-28 Rnd By Us B V Shape-Shifting a Configuration of Reusable Elements.
US10093488B2 (en) 2013-03-15 2018-10-09 Rnd By Us B.V. Shape-shifting a configuration of reusable elements
US9956494B2 (en) 2013-03-15 2018-05-01 Rnd By Us B.V. Element comprising sensors for detecting grab motion or grab release motion for actuating inter-element holding or releasing
US9229629B2 (en) * 2013-03-18 2016-01-05 Transcend Information, Inc. Device identification method, communicative connection method between multiple devices, and interface controlling method
US10610768B2 (en) * 2013-05-07 2020-04-07 Carder Starr Digitial multilingual word building game
CN104461173A (en) * 2013-09-23 2015-03-25 上海华师京城高新技术(集团)有限公司 Multimedia interaction all-in-one machine
CN103480159B (en) * 2013-09-30 2015-12-02 广州视源电子科技股份有限公司 toy building system
US20150099567A1 (en) * 2013-10-09 2015-04-09 Cherif Atia Algreatly Method of gaming
KR102494005B1 (en) * 2014-05-15 2023-01-31 레고 에이/에스 A toy construction system with function construction elements
US10607502B2 (en) 2014-06-04 2020-03-31 Square Panda Inc. Phonics exploration toy
US10825352B2 (en) * 2014-06-04 2020-11-03 Square Panda Inc. Letter manipulative identification board
US20160184724A1 (en) 2014-08-31 2016-06-30 Andrew Butler Dynamic App Programming Environment with Physical Object Interaction
CN104383697A (en) * 2014-11-25 2015-03-04 上海电机学院 Electronic building block and electronic building block group
WO2016165024A1 (en) * 2015-04-16 2016-10-20 Andrade James Braille instruction system and method
US10088280B2 (en) 2015-11-21 2018-10-02 Norma Zell Control module for autonomous target system
CN105641946B (en) * 2016-03-10 2017-11-17 深圳市翰童科技有限公司 Wireless electron building blocks
CN105727571B (en) * 2016-04-01 2017-10-31 深圳市翰童科技有限公司 Electronic building blocks
TWI619101B (en) * 2016-05-13 2018-03-21 國立臺灣師範大學 Puzzle learning system
US10847046B2 (en) 2017-01-23 2020-11-24 International Business Machines Corporation Learning with smart blocks
KR102366617B1 (en) * 2017-03-28 2022-02-23 삼성전자주식회사 Method for operating speech recognition service and electronic device supporting the same
US20190073915A1 (en) * 2017-09-06 2019-03-07 International Business Machines Corporation Interactive and instructional interface for learning
JP6795534B2 (en) * 2018-02-27 2020-12-02 振名 畢 Questioning system that combines a physical object and a computer
US11017129B2 (en) 2018-04-17 2021-05-25 International Business Machines Corporation Template selector
CN113365710B (en) * 2019-01-31 2022-11-08 乐高公司 Modular toy system with electronic toy modules
US11616844B2 (en) 2019-03-14 2023-03-28 Sphero, Inc. Modular electronic and digital building systems and methods of using the same

Family Cites Families (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3654706A (en) * 1970-06-23 1972-04-11 Donald J Perrella Educational device
US4342904A (en) 1980-10-27 1982-08-03 Minnesota Mining And Manufacturing Company Lightweight ferromagnetic marker for the detection of objects having markers secured thereto
US4703573A (en) * 1985-02-04 1987-11-03 Montgomery John W Visual and audible activated work and method of forming same
US5119077A (en) 1988-01-15 1992-06-02 Giorgio Paul J Interactive ballistic tracking apparatus
US5013245A (en) * 1988-04-29 1991-05-07 Benedict Morgan D Information shapes
US4936780A (en) * 1989-01-31 1990-06-26 Cogliano Mary A Touch sensor alpha-numeric blocks
US5072414A (en) 1989-07-31 1991-12-10 Accuweb, Inc. Ultrasonic web edge detection method and apparatus
US5183398A (en) 1990-06-01 1993-02-02 The Software Toolworks Apparatus and method for interactive instruction of a student
US5188533B1 (en) * 1990-06-01 1997-09-09 Leapfrog Rbt Llc Speech synthesizing indicia for interactive learning
US5396265A (en) 1990-09-17 1995-03-07 Massachusetts Institute Of Technology Three-dimensional tactile computer input device
US5228859A (en) 1990-09-17 1993-07-20 Interactive Training Technologies Interactive educational and training system with concurrent digitized sound and video output
US5146566A (en) 1991-05-29 1992-09-08 Ibm Corporation Input/output system for computer user interface using magnetic levitation
JPH06259021A (en) * 1993-03-05 1994-09-16 Nippon Steel Corp Combination type display device
US5328373A (en) * 1993-03-30 1994-07-12 Regna Lee Wood Method and apparatus for teaching reading
US5320358A (en) * 1993-04-27 1994-06-14 Rpb, Inc. Shooting game having programmable targets and course for use therewith
US5364272A (en) * 1993-08-09 1994-11-15 Texas Instruments Incorporated Apparatus and method for teaching
US5511980A (en) * 1994-02-23 1996-04-30 Leapfrog Rbt, L.L.C. Talking phonics interactive learning device
US5860653A (en) * 1995-05-15 1999-01-19 Jacobs; Robert Method and apparatus for playing a word game
US5823782A (en) * 1995-12-29 1998-10-20 Tinkers & Chance Character recognition educational system
US5991693A (en) * 1996-02-23 1999-11-23 Mindcraft Technologies, Inc. Wireless I/O apparatus and method of computer-assisted instruction
JPH1063176A (en) * 1996-08-13 1998-03-06 Inter Group:Kk Foreign language learning device and its recording medium
JPH10272255A (en) * 1997-04-01 1998-10-13 Ee D K:Kk Portable game machine provided with communication function and its communicating method
CA2225060A1 (en) * 1997-04-09 1998-10-09 Peter Suilun Fong Interactive talking dolls
JP4143158B2 (en) * 1997-04-16 2008-09-03 聯華電子股份有限公司 Data carrier
CN1267228A (en) * 1997-05-19 2000-09-20 创造者有限公司 Programmable assembly toy
US6271453B1 (en) * 1997-05-21 2001-08-07 L Leonard Hacker Musical blocks and clocks
JPH11341121A (en) * 1998-05-28 1999-12-10 Nec Corp Mobile radio equipment
US20020160340A1 (en) * 1998-07-31 2002-10-31 Marcus Brian I. Character recognition educational system
US6469689B1 (en) 1998-08-07 2002-10-22 Hewlett-Packard Company Appliance and method of using same having a capability to graphically associate and disassociate data with and from one another
US6473070B2 (en) 1998-11-03 2002-10-29 Intel Corporation Wireless tracking system
RU2136342C1 (en) 1998-12-09 1999-09-10 Григорьев Сергей Валерьевич Game with sounds, preferably musical sounds, and device which implements said game
US6149490A (en) * 1998-12-15 2000-11-21 Tiger Electronics, Ltd. Interactive toy
US6685479B1 (en) * 1999-02-22 2004-02-03 Nabil N. Ghaly Personal hand held device
JP3540187B2 (en) * 1999-02-25 2004-07-07 シャープ株式会社 Display device
US7456820B1 (en) 1999-05-25 2008-11-25 Silverbrook Research Pty Ltd Hand drawing capture via interface surface
US6462264B1 (en) 1999-07-26 2002-10-08 Carl Elam Method and apparatus for audio broadcast of enhanced musical instrument digital interface (MIDI) data formats for control of a sound generator to create music, lyrics, and speech
US6514085B2 (en) 1999-07-30 2003-02-04 Element K Online Llc Methods and apparatus for computer based training relating to devices
US6330427B1 (en) 1999-09-08 2001-12-11 Joel B. Tabachnik Talking novelty device with book
US6652457B1 (en) 1999-09-27 2003-11-25 Gary L. Skiba Stimulus-response conditioning process
US6654748B1 (en) 1999-12-07 2003-11-25 Rwd Technologies, Inc. Dynamic application browser and database for use therewith
US6620024B2 (en) * 2000-02-02 2003-09-16 Silverlit Toys Manufactory, Ltd. Computerized toy
US20020058235A1 (en) * 2000-02-29 2002-05-16 Dinnerstein Mitchell Elliot Jack switch talking block
US6353168B1 (en) * 2000-03-03 2002-03-05 Neurosmith, Llc Educational music instrument for children
US6491523B1 (en) 2000-04-28 2002-12-10 Janice Altman Sign language instruction system and method
US6551165B2 (en) * 2000-07-01 2003-04-22 Alexander V Smirnov Interacting toys
US6685477B1 (en) * 2000-09-28 2004-02-03 Eta/Cuisenaire, A Division Of A. Daigger & Company Method and apparatus for teaching and learning reading
JP4731008B2 (en) * 2000-12-05 2011-07-20 株式会社バンダイナムコゲームス Information providing system and information storage medium
US7170468B2 (en) * 2001-02-21 2007-01-30 International Business Machines Corporation Collaborative tablet computer
US6682392B2 (en) * 2001-04-19 2004-01-27 Thinking Technology, Inc. Physically interactive electronic toys
GB2376192A (en) 2001-05-25 2002-12-11 Tronji Ltd Cartridge based electronic display system.
US6679751B1 (en) * 2001-11-13 2004-01-20 Mattel, Inc. Stackable articles toy for children
US7347760B2 (en) * 2002-01-05 2008-03-25 Leapfrog Enterprises, Inc. Interactive toy
US7003598B2 (en) 2002-09-18 2006-02-21 Bright Entertainment Limited Remote control for providing interactive DVD navigation based on user response
US20040063078A1 (en) 2002-09-30 2004-04-01 Marcus Brian I. Electronic educational toy appliance
EP1486237A1 (en) * 2003-06-13 2004-12-15 Hausemann en Hötte BV Puzzle system
JP4073885B2 (en) * 2003-06-17 2008-04-09 任天堂株式会社 GAME SYSTEM, GAME DEVICE, AND GAME PROGRAM
KR100528476B1 (en) 2003-07-22 2005-11-15 삼성전자주식회사 Interupt processing circuit of computer system
US7316567B2 (en) * 2003-08-01 2008-01-08 Jennifer Chia-Jen Hsieh Physical programming toy
US7336256B2 (en) 2004-01-30 2008-02-26 International Business Machines Corporation Conveying the importance of display screen data using audible indicators
JP4241429B2 (en) * 2004-02-25 2009-03-18 奈津 川北 Display device
US7242369B2 (en) * 2004-10-26 2007-07-10 Benq Corporation Method of displaying text on multiple display devices
US7238026B2 (en) * 2004-11-04 2007-07-03 Mattel, Inc. Activity device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020061701A1 (en) * 2000-04-28 2002-05-23 Chan Albert Wai Multiple part toy coding and recognition system
US6443796B1 (en) * 2000-06-19 2002-09-03 Judith Ann Shackelford Smart blocks
US7184718B2 (en) * 2002-07-30 2007-02-27 Nokia Corporation Transformable mobile station
US20060154711A1 (en) * 2005-01-10 2006-07-13 Ellis Anthony M Multiply interconnectable environmentally interactive character simulation module method and system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140104154A1 (en) * 2012-10-11 2014-04-17 Casio Computer Co., Ltd. Information output apparatus for outputting the same information as another apparatus and method
CN104064068A (en) * 2014-06-17 2014-09-24 王岳雄 Parent-children interaction learning method and device for implementing parent-children interaction learning method
US20170095729A1 (en) * 2015-10-04 2017-04-06 Shari Spiridigliozzi Electronic word game
US10413808B2 (en) * 2015-10-04 2019-09-17 Shari Spiridigliozzi Electronic word game

Also Published As

Publication number Publication date
MX2007011816A (en) 2008-01-16
EP2363848A2 (en) 2011-09-07
JP2008534996A (en) 2008-08-28
US20120007840A1 (en) 2012-01-12
RU2007139277A (en) 2009-04-27
CN101185108A (en) 2008-05-21
JP5154399B2 (en) 2013-02-27
DE602006020726D1 (en) 2011-04-28
EP2369563A2 (en) 2011-09-28
DK1899939T3 (en) 2011-06-20
US20060215476A1 (en) 2006-09-28
CN101185108B (en) 2012-11-21
ES2364956T3 (en) 2011-09-19
US8057233B2 (en) 2011-11-15
EP2369563A3 (en) 2011-12-14
GB0506159D0 (en) 2005-05-04
GB2424510A (en) 2006-09-27
ATE502371T1 (en) 2011-04-15
EP2363848A3 (en) 2011-12-14
RU2408933C2 (en) 2011-01-10

Legal Events

Date Code Title Description
AS Assignment

Owner name: MITILE LTD., UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SMALTI TECHNOLOGY LTD.;REEL/FRAME:030081/0890

Effective date: 20130322

AS Assignment

Owner name: EDWARDS, THOMAS JOSEPH, MR, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MITILE LTD;REEL/FRAME:033558/0555

Effective date: 20140818

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION