WO2004114644A2 - Apparatus having cooperating wide-angle digital camera system and microphone array - Google Patents

Apparatus having cooperating wide-angle digital camera system and microphone array

Info

Publication number
WO2004114644A2
WO2004114644A2 (PCT/US2003/002235, US0302235W)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
microphones
angle
wide
microphone array
Prior art date
Application number
PCT/US2003/002235
Other languages
French (fr)
Other versions
WO2004114644A3 (en)
Inventor
Michael L. Charlier
Robert A. Zurek
Thomas R. Schirtzinger
William L. Reber
Christopher B. Galvin
Original Assignee
Motorola, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola, Inc. filed Critical Motorola, Inc.
Priority to AU2003304231A priority Critical patent/AU2003304231A1/en
Publication of WO2004114644A2 publication Critical patent/WO2004114644A2/en
Publication of WO2004114644A3 publication Critical patent/WO2004114644A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2625Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
    • H04N5/2627Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect for providing spin image effect, 3D stop motion effect or temporal freeze effect
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • H04N7/142Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • Certain three-dimensional geometric figures approximate a sphere in this way, with microphones placed at either the centers of their faces or their vertices.
  • The simplest of these figures are the tetrahedron and the cube. However, these two figures provide insufficient microphone density to allow adequate zooming of the microphone beam.
  • Figures such as the dodecahedron, the icosahedron, and the truncated icosahedron follow the prescribed location rules and allow for robust compound microphone creation.
  • n is an integer greater than zero.
  • This, combined with directional cardioid microphones at each face or vertex, allows for the creation of definable main beam widths with nearly nonexistent side lobes. This is possible because a summation of opposing microphones creates an omnidirectional microphone, and a difference of said microphones creates an acoustic dipole.
  • These compound omnidirectional and dipole microphones are used as building blocks for higher-order compound microphones used in the localized playback of the system.
  • a beam can be formed in software that not only has significant reduction outside of its bounds, but also can maintain a constant beam width while being steered at any angle between neighboring microphones.
  • the entire sphere can be covered with equal precision and reduction in acoustic signals emanating from sources outside of its beamwidth.
  • the aforementioned orientations of microphones on a sphere allow for a higher-order compound microphone that can be defined as a relationship of the difference of two on-axis microphones times the nearest on-axis microphone, multiplied by the same relation for each of the nearest equidistant microphone pairs.
  • this expression reduces to m1(c1*m1-m2) * m3(c2*m3-m4) * m5(c3*m5-m6), where m1 to m8 represent eight microphone elements, and the cn are constants that determine the direction of the beam relative to an axis 200 defined through microphones m1 and m2 (a code sketch of this product appears after this list).
  • the first compound element, comprised of the m1 and m2 microphone elements, is a variation of a second-order cardioid.
  • the following terms, comprised of the elements m3, m4, m5 and m6, cover the closest surrounding pairs. To further increase the order of the compound microphone, the next closest sets of pairs are included with their sets of coefficients cn until the order of the array is reached. In this way, the zoom function of the microphone array may be practiced.
  • the lowest order zoom function is a cardioid microphone closest to the source.
  • the next level is a second-order modified cardioid directed at the source.
  • the next level is an order involving all of the adjacent microphone pairs as shown above for the two-dimensional circular array. This process may be continued using expanding layers of equidistant microphones until a desired level of isolation is achieved.
  • FIG. 6 shows an example of microphones m1' to m8' located at vertices of a truncated icosahedron whose edges are all the same size (e.g. a buckyball).
  • the form of the higher-order compound beaming function is defined as follows: m1'(c1'*m1'-m2') * m3'(c2'*m3'-m4') * m5'(c3'*m5'-m6') * m7'(c4'*m7'-m8').
  • the first adjacent ring of equidistant microphones contains three microphone pairs.
  • the second ring of nearly equidistant microphones would contain 6 pairs, and so on.
  • varying the coefficients cn' effectively steers the beam to any angle in altitude or azimuth with nearly constant beamwidth, given proper values of the cn' and using the closest microphone as m1'.
  • An implementation of this type of system using a half sphere would incorporate half the microphones used in the full sphere plus one additional omnidirectional microphone. The same placement rules are used for the half sphere as in the full sphere. With the addition of the single omnidirectional microphone, the same level of processing is available for beam direction and manipulation.
  • An equivalent dipole microphone can be provided by subtracting an individual cardioid from the omnidirectional microphone.
  • the same series of cardioid-times-dipole terms is possible by merely changing the series to m1(c1*m0-m1) * m3(c2*m0-m3) * m5(c3*m0-m5) * m7(c4*m0-m7), where m0 is the omnidirectional microphone.
  • the array can also be reduced to two or more rings of microphones mounted around the base of the camera and processed similarly to the two-dimensional array of FIG. 5, except in azimuth and a small arc of altitude.
  • This technique has a limited range of vertical steering, but maintains the horizontal range and precision.
  • An example of such an array of coaxial and non-concentric rings is shown in FIG. 7.
  • the microphone pairs are defined by matching a microphone 210 on a top ring 212 of the unit with a diametrically opposed microphone 214 on a bottom ring 216. If the array consists of an odd number of rings, a pair of diametrically opposed microphones 220 and 222 in a center ring 224 is employed.
  • Automatic acoustic-based steering of the microphone array 20 and wide-angle digital camera system 40 in FIG. 1 may be accomplished by first examining a frequency-band-limited amplitude of each of a series of compound microphones whose beam axis lies on an axis through each microphone capsule, and whose beam width is equal to an angular distance between an on-axis microphone and a nearest neighbor microphone.
  • This beam can be achieved by combining signals produced by an on-axis microphone pair and a closest ring of accompanying microphone pairs. This process mitigates, and preferably eliminates, the possibility of false images due to microphone overlap as previously discussed.
  • the next step includes comparing the output of several newly-created virtual compound microphones spaced within an area of the original compound beam.
  • Each of the resulting beams has the same beam width as the original compound beam, thus allowing overlap between the new beams.
  • the overlap of subsequent beams can be used to very accurately locate the audio source 22 within the solid angle of the original beam.
  • the beam can be narrowed by including the next closest ring of equidistant microphones. This iterative process occurs over time, resulting in a reduced initial computation time and a visual and audible zooming on a subject as he/she speaks.
  • the effect of the audible zoom is to reduce other audible noise while the speaker's voice level remains about constant.
  • the audio zoom process proceeds as described earlier by beginning with the cardioid signal closest to the audio source 22, switching to the second-order cardioid, and then to higher-order steered beams aimed at the audio source 22 as time progresses.
  • the video follows a similar zooming process, as illustrated in FIG. 8.
  • the image processor 44 initially generates a perspective corrected image sequence of a quadrant 240 which includes an audio source (e.g. a human 242 that is speaking).
  • the image processor 44 generates a perspective corrected image sequence of a smaller portion 244 which includes the human 242.
  • the image processor 44 generates a perspective corrected image sequence of an even smaller portion 246 which provides a head-and-shoulder shot of the human 242.
  • the gradual, coordinated zooming of the audio and video signals acts to reduce a so-called "popcorn" effect of switching between two very different zoomed-in audio and video sources, especially if the two sources are physically near each other.
  • An alternative implementation of the auto-tracking feature comprises using the first step of the above-described audio location method to find a general location of the subject.
  • the general location of the human 242 is determined to be within the portion 244. Center coordinates of the general location are communicated to the image processor 44.
  • a video mapping technique is used to identify all the possible audio sources within the general location.
  • the human 242 and a non-speaking human 250 are possible audio sources within the general location indicated by the portion 244. Coordinates of these possible sources are fed back to the audio processor 34.
  • the audio processor 34 determines which of the potential sources is speaking using virtual compound microphones directed at the potential sources. Once the audio source is identified, the audio processor 34 sends the coordinates of the audio source to the image processor 64.
  • the audio processor 34 also manipulates the incoming audio data stream to focus the beam of the microphone array 62 on the coordinates of the head of the human 242. This process utilizes a gradual zooming technique as described above.
  • Embodiments of the herein-disclosed inventions may be used in a variety of applications. Examples include, but are not limited to, teleconferencing applications, security applications, and automotive applications.
  • In automotive applications, the capture unit may be mounted within a cabin of an automobile. The capture unit is mounted to a ceiling in the cabin, and located to obtain wide-angle images which include a driver, a front passenger, and any rear passengers. Any individual in the automobile may use the apparatus to place calls. Audio beam steering toward the speaking individual is beneficial to reduce background noise.
  • the capture unit may be autonomously mobile. For example, the capture unit may be mounted to a movable robot for an airport security application.
  • the microphones in the microphone array may be arranged in a two-dimensional pattern such as the one shown in FIG. 5.
  • the microphone array may comprise a ring of microphones disposed around the base of the capture unit. This configuration would allow precise positioning of the transmitting audio source in the azimuth angle, but would not discriminate to the same extent in the altitude angle.
  • the wide-angle digital camera system may be sensitive to non-visible light, such as infrared light, instead of visible light.
  • the wide-angle digital camera system may have a low-light mode to capture images with a low level of lighting.
  • the herein-described profile comparisons may be used to automatically recognize a person's voice. Upon recognizing a person's voice, textual and/or graphical information indicating the person's name, title, company, and/or affiliation may be included as a caption to his/her images.
  • computer-generated images may be displayed in the display region 190.
  • a word processing document may be shown in the display region 190 for collaborative work by the participants.
  • computer-generated presentation slides may be displayed in the display region 190.
  • Other collaborative computing applications are also enabled using the display region 190.
  • the herein-disclosed capture units may be powered in various ways, including but not limited to, mains power, a rechargeable or non-rechargeable battery, solar power or wind-up power.
  • the herein-disclosed processing units may be either integrated with or interfaced to a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer, to remotely communicate images and audio.
  • the herein-disclosed processing units may be integrated with a circuit card that interfaces with either a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer, to remotely communicate images and audio.
  • the images and audio generated by the processing unit may be remotely received by a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer.
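To make the compound-microphone expressions above concrete, the sketch below (Python/numpy) evaluates the product form m1(c1*m1-m2) * m3(c2*m3-m4) * ..., implements the audio zoom by folding in successive rings of pairs, and performs the coarse localization step of picking the loudest fixed compound beam. It is an illustrative reading of the text, not the patent's implementation; the function names, the pair ordering, and the use of raw float sample blocks are assumptions.

```python
import numpy as np

def compound_beam(pairs, coeffs):
    """Patent-style higher-order compound microphone:
    m1*(c1*m1 - m2) * m3*(c2*m3 - m4) * ...
    pairs  : list of (on_axis, opposed) float sample blocks, ordered
             from the on-axis pair outward through the rings of
             equidistant pairs.
    coeffs : steering constants c1, c2, ... (one per pair); varying
             them steers the beam between neighboring capsules."""
    out = np.ones_like(pairs[0][0])
    for (m_on, m_opp), c in zip(pairs, coeffs):
        out *= m_on * (c * m_on - m_opp)
    return out

def zoomed_beam(pairs, coeffs, order):
    """Audio zoom: order 1 is the modified second-order cardioid from
    the on-axis pair alone; each higher order folds in the next ring
    of pairs, narrowing the beam and its side lobes further."""
    return compound_beam(pairs[:order], coeffs[:order])

def coarse_direction(beam_outputs):
    """First localization step: compare the band-limited amplitude of
    one fixed compound beam per capsule axis and pick the loudest."""
    return int(np.argmax([np.mean(b ** 2) for b in beam_outputs]))
```

For the hemispherical variant described above, the same sketch applies with each opposed capsule replaced by the single omnidirectional microphone m0.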

Abstract

A microphone array (20) senses an audio source (22). An audio processor (34) is responsive to the microphone array (20) to determine a direction (24) of the audio source (22) in relation to a frame of reference (32). The direction (24) comprises an azimuth angle (26) and an altitude angle (30). A wide-angle digital camera system (40) captures at least one wide-angle image. An image processor (44) is responsive to the audio processor (34) to process the at least one wide-angle image to generate at least one perspective corrected image (46) in the direction (24) of the audio source (22).

Description

APPARATUS HAVING COOPERATING WIDE-ANGLE DIGITAL CAMERA SYSTEM AND MICROPHONE ARRAY
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to wide-angle digital camera systems and beam-steered microphone arrays.
2. Description of the Related Art
Immersive video technology enables pan, tilt and zoom camera functions to be performed electronically without physically moving a camera. An example of an immersive video technology is disclosed in U.S. Patent No. 5,185,667 to Zimmermann.
Various applications of immersive video technology have been disclosed in U.S. Patent Nos. 5,594,935, 5,706,421, 5,894,589 and 6,111,568 to Reber et al. One application of particular interest is teleconferencing using immersive video.
U.S. Patent No. 5,686,957 to Baker discloses a teleconferencing imaging system with automatic camera steering. The system comprises a video camera and lens system that provides a panoramic image. The system detects the direction of a particular speaker within the panoramic image using an array of microphones. Direction signals are provided to electronically select a portion of the image corresponding to the particular speaker.
In one embodiment, an audio directive component is comprised of four microphones spaced apart and arranged concentrically about the camera and lens system. The above combination is placed on a conference room table so that all participants have audio access to the microphones. Differences in audio signal amplitude obtained from each microphone are detected to determine the closest microphone to a current participant speaker. A point between microphones may be selected as the
"closest microphone" using normal audio beam steering techniques. This approach is amenable in teleconferences where a number of participants far exceeds the number of microphones. A segment of the panoramic image which correlates with the "closest microphone" is selected to provide the current speaker's image.
Past related systems have used a table-mounted system that had little or no use for a high pixel density in the center of a 180 degree or 360 degree optical system. This implementation has drawbacks for both teleconference applications and security applications. One drawback is that objects or participants that lie in the same angle around the device as another object, but lie behind the other object, are obstructed from view and/or difficult to separate by the device. This drawback is especially exaggerated in security applications where many of the objects that the user would want to observe are resting on a horizontal surface distributed across a room or an external area. Like in the video domain, separation of audio signals of two persons, one sitting behind another, is problematic.
Further, side conversations are a pariah to teleconferences. Participants are often likely to strike up side conversations when all the participants are not present in the same room. Often these side conversations are all that a remote user can hear when the system in use utilizes a distributed microphone array which may have a microphone element in close proximity to the parties involved in the side conversation. Also, tabletop mounted systems are prone to noises transmitted through the table by attendees moving materials such as papers, or rapping objects on the table. This vibration coupling into the microphones is difficult to isolate and often has a higher sensitivity than the people talking in the room.
Still further, table mounted teleconferencing systems require an additional document camera when the users desire to share one or more printed documents with remote attendees.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is pointed out with particularity in the appended claims. However, other features are described in the following detailed description in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram of an embodiment of an immersive audio/video apparatus;
FIG. 2 is a block diagram of another embodiment of an immersive audio/video apparatus;
FIG. 3 is an illustration of an embodiment of an apparatus of either FIG. 1 or FIG. 2;
FIG. 4 illustrates use of an embodiment of an immersive audio/video apparatus in a teleconferencing application;
FIG. 5 illustrates an embodiment of a two- dimensional circular microphone array;
FIG. 6 illustrates an embodiment of a microphone array comprising microphones located at vertices of a truncated icosahedron;
FIG. 7 illustrates an embodiment of a multi-ring microphone array; and
FIG. 8 illustrates a video zooming process.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Disclosed herein are systems and methods to improve user presence and intelligibility in live audio/video applications such as teleconferencing applications and security monitoring applications. The present disclosure contemplates a directional microphone array that is coupled to either a 180 degree or a 360 degree immersive digital video camera, wherein a direction of an audio event is determinable in at least two degrees of freedom, and a portion of the immersive video in the direction of the audio event is automatically selected and transmitted. Based on its frequency profile, the audio event may further initiate the transmission of an alarm signal .
Further disclosed is an apparatus wherein the directional microphone array is either automatically or manually steered and zoomed to track panning and zooming of an immersive video.
Still further disclosed is a microphone array comprising a plurality of individual microphone elements mounted to a semispherical housing to allow directionality in both an azimuth angle and an altitude angle. In the case of a hemisphere, the microphone array allows accurate beam positioning over an entire hemisphere. The microphone array may be extended to a full spherical array, which is suitable for use with two cameras having hemispherical fields of view. Embodiments of the apparatus may be either table mounted or mounted overhead. In teleconferencing applications, the device may be mounted slightly above the head level of the tallest attendee. This position allows the visualization and isolation of persons or objects seated behind the first row of attendees. Further, a more cosmetically-acceptable view for the remote user is provided, as he/she is not looking up the noses of the remote participants. Still further, the overhead system allows an image of a document placed on a tabletop to be acquired with a higher density of pixels. Also, the overhead position allows the efficient use of a three-dimensional microphone array to separate these distinct audio sources. Where prior devices have used either a two- dimensional array or a plethora of single microphones — one for each individual user — a three dimensional array can be used to sense the direction of the source much more efficiently using software-generated compound microphones. This beneficially mitigates the prospect of falsely locating an audio source. By creating a compound microphone that has a beam width limited to the separation between microphone locations, an overlap error that is inherent in selecting a source using single element directional or omnidirectional microphones is mitigated, and preferably eliminated. Further, other useful aspects of the microphone array such as noise reduction of environment and other participants carrying one side conversations can be exploited. FIG. 1 is a block diagram of an embodiment of an immersive audio/video apparatus. The apparatus comprises a microphone array 20 to sense an audio source 22. The microphone array 20 comprises a sufficient number of microphones arranged in a suitable pattern to sense a direction 24, comprising both an azimuth angle 26 and an altitude angle 30, of the audio source 22 in relation to a frame of reference 32. The microphones may comprises any combination of individually-directional microphones and/or omnidirectional microphones to serve the aforementioned purpose. In this patent application, the term "audio" should be construed to be inclusive of acoustic pressure waves.
An audio processor 34 is responsive to the microphone array 20 to determine the direction 24, comprising both the azimuth angle 26 and the altitude angle 30, of the audio source 22. The audio processor 34 outputs one or more signals 36 indicative of the direction 24. For example, the audio processor 34 may generate a first signal indicating the azimuth angle and a second signal indicating the altitude angle. Alternatively, other quantities based on the azimuth angle and the altitude angle may be outputted by the audio processor 34. The audio processor 34 outputs an audio signal 38 as sensed by the microphone array 20. The audio processor 34 may process various channels from the microphone array 20 to effectively beam-steer and/or modify a beam width of the microphone array 20 toward the audio source 22. The audio processor 34 may further perform noise reduction acts in generating the audio signal 38.
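The description does not fix a direction-finding algorithm at this point; one conventional possibility is for the audio processor to scan a grid of candidate azimuth/altitude directions and report the delay-and-sum beam of greatest power. A minimal sketch under that assumption (the grid resolution, integer-sample delays, and all names are illustrative):

```python
import numpy as np

C_SOUND = 343.0  # speed of sound, m/s

def steered_power(frames, mic_xyz, az, alt, fs):
    """Delay-and-sum output power for one candidate direction.
    frames  : (n_mics, n_samples) array of sample blocks
    mic_xyz : (n_mics, 3) capsule positions in metres, array-centred
    az, alt : candidate azimuth and altitude angles in radians"""
    # Unit vector from the array toward the candidate source.
    d = np.array([np.cos(alt) * np.cos(az),
                  np.cos(alt) * np.sin(az),
                  np.sin(alt)])
    # Far-field arrival advance of each capsule; capsules nearer the
    # source hear the wave earlier, so shift them back into alignment.
    adv = mic_xyz @ d / C_SOUND
    shifts = np.round((adv.max() - adv) * fs).astype(int)
    n = frames.shape[1] - int(shifts.max())
    aligned = sum(frames[i, s:s + n] for i, s in enumerate(shifts))
    return float(np.mean(aligned ** 2))

def locate_source(frames, mic_xyz, fs):
    """Grid-search azimuth (0..360 deg) and altitude (0..90 deg) for
    the direction of greatest beam power."""
    best = (0.0, 0.0, -1.0)
    for az in np.deg2rad(np.arange(0, 360, 10)):
        for alt in np.deg2rad(np.arange(0, 91, 10)):
            p = steered_power(frames, mic_xyz, az, alt, fs)
            if p > best[2]:
                best = (az, alt, p)
    return best[0], best[1]  # (azimuth 26, altitude 30), in radians
```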
The apparatus further comprises a wide-angle digital camera system 40. The wide-angle digital camera system 40 has a field of view 42 greater than 50 degrees, and more preferably, greater than 120 degrees. In exemplary embodiments, the field of view 42 ranges from at least 180 degrees to about 360 degrees. The wide-angle digital camera system 40 may include an optical element such as a fisheye lens which facilitates all objects in the field of view 42 being substantially in focus. However, many other wide-angle lenses using either traditional optics or holographic elements are also suitable for this application. Alternatively, the wide-angle digital camera system 40 may comprise a convex mirror to provide the wide-angle field of view 42.
The wide-angle digital camera system 40 captures at least one wide-angle image, and preferably a sequence of wide-angle images. The wide-angle images include images of the audio source 22. Depending on its location, the audio source 22 may be located anywhere within the wide-angle images. An image processor 44 is responsive to the audio processor 34 and the wide-angle digital camera system 40. The image processor 44 processes one or more wide-angle images to generate at least one, and preferably a sequence, of perspective corrected images 46 in the direction 24 of the audio source 22. The image processor 44 selects a portion of the wide-angle images based on the direction signals 36 so that the audio source 22 is about centered therein, and corrects the distortion introduced by the wide-angle optical element(s) for the portion. Thus, the perspective corrected images 46 include an image of the audio source 22 about centered therein regardless of the azimuth angle 26 and the altitude angle 30. The perspective corrected images 46 may be outputted either to a display device for viewing, to a mass storage device for storage, or to a transmitter for remote viewing or storage.
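The perspective-correction math is likewise left open. Assuming an equidistant fisheye model (image radius r = f * theta), one common approach maps every pixel of the virtual perspective view back into the wide-angle image; the sketch below uses nearest-neighbour sampling for brevity, and every name in it is illustrative rather than from the patent.

```python
import numpy as np

def dewarp(fisheye, out_w, out_h, pan, tilt, fov, f_pix, cx, cy):
    """Sample a perspective-corrected view from an equidistant
    fisheye image. pan/tilt select the view centre, fov is the
    horizontal field of view of the output (radians), and
    f_pix/cx/cy are the fisheye focal length and optical centre
    in pixels."""
    # Virtual perspective camera: pixel grid and focal length.
    focal = (out_w / 2) / np.tan(fov / 2)
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    u, v = np.meshgrid(xs, ys)
    rays = np.stack([u, v, np.full(u.shape, focal)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by tilt (about x), then pan (about z).
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    rz = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    rays = rays @ (rz @ rx).T

    # Equidistant fisheye: radius from the optical centre is
    # f * theta, theta being the angle from the optical axis (+z).
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    px = np.clip(cx + f_pix * theta * np.cos(phi), 0, fisheye.shape[1] - 1)
    py = np.clip(cy + f_pix * theta * np.sin(phi), 0, fisheye.shape[0] - 1)
    return fisheye[py.astype(int), px.astype(int)]
```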
The audio processor 34 may determine the direction 24 of a greatest local amplitude in a particular audio band. For teleconferencing applications, the particular audio band may comprise a human voice band. Considering the audio source 22 to be a human voice source, for example, the audio processor 34 filters signals from the microphone array 20 to attenuate non-human-voice audio sources 50 (e.g. an air conditioning system) with respect to the audio source 22. Thus, even if the non-human-voice audio sources 50 have a greater amplitude than the audio source 22, the greatest amplitude in the particular audio band would correspond to the audio source 22. Either in addition to or as an alternative to the aforementioned direction-determining approach, the audio processor 34 may determine the direction 24 based on a limited-duration audio event. Examples of limited-duration audio events include, but are not limited to, a gun shot, glass breaking and a door being battered. In these and other cases, the image processor 44 may process the wide-angle images to generate the perspective corrected images 46 in the direction 24 after the limited-duration audio event has ended. Limited-duration audio events are typical in security applications.
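As a sketch of the two detection modes just described, the fragment below band-limits a channel to a human voice band before the amplitude comparison, and flags a limited-duration event by its energy jump over the running background. The 300-3400 Hz band, the 20 ms window, and the threshold factor are assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def voice_band(x, fs, lo=300.0, hi=3400.0):
    """Band-pass a channel to an (assumed) telephone voice band so
    broadband noise such as HVAC rumble does not win the
    greatest-local-amplitude test."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def limited_duration_event(x, fs, win=0.02, factor=8.0):
    """Flag a transient (gunshot-like) event: a 20 ms window whose
    energy jumps well above the running background estimate.
    Returns the sample index of the event, or None."""
    n = int(win * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    energy = (frames ** 2).mean(axis=1)
    background = np.median(energy) + 1e-12
    hits = np.nonzero(energy > factor * background)[0]
    return int(hits[0] * n) if hits.size else None
```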
In addition to determining the direction 24, the audio processor 34 may compare a profile of the audio source 22 to a pre-stored profile. The comparison may be performed in a time domain and/or a frequency domain. Preferably, a wavetable lookup is performed to compare the profile of the audio source 22 to a plurality of pre-stored profiles. If the profile of the audio source 22 sufficiently matches one of the pre-stored profiles, the audio processor 34 may initiate an action such as transmitting an alarm signal. The alarm signal augments the perspective corrected image 46 corresponding to the direction 24 of the audio source 22. The use of profile comparisons is well-suited for security applications, wherein a gun shot profile, a glass-breaking profile, and other security event profiles are pre-stored.
Profile comparisons may be either inclusionary or exclusionary in nature. For an inclusionary pre-stored profile, the action is initiated if the profile sufficiently matches the pre-stored profile. For an exclusionary pre-stored profile, the action is inhibited if the profile sufficiently matches the pre-stored profile. The use of exclusionary pre-stored profiles is beneficial to mitigate occurrences of false alarms. For example, if a specific sound, such as thunder associated with a lightning bolt, causes an undesired initiation of the alarm, a user may actuate an input device (e.g. depress a button) to indicate that the specific sound should be stored as an exclusionary pre-stored profile. As a result, subsequent thunder events would not initiate the alarm.
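One way to realize the profile comparison with inclusionary and exclusionary entries is a normalized spectral signature matched by correlation, sketched below; the signature construction and the 0.9 similarity threshold are illustrative assumptions.

```python
import numpy as np

def spectral_profile(x, fs, bins=64):
    """Coarse magnitude-spectrum signature used as a stored 'profile'."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    # Average into a fixed number of bands and normalise.
    edges = np.linspace(0, len(spec), bins + 1).astype(int)
    prof = np.array([spec[a:b].mean() for a, b in zip(edges, edges[1:])])
    return prof / (np.linalg.norm(prof) + 1e-12)

def should_alarm(event, inclusionary, exclusionary, thresh=0.9):
    """Inclusionary profiles trigger the alarm; exclusionary profiles
    (e.g. a user-stored thunder clap) veto it."""
    def match(p):
        return float(np.dot(event, p)) >= thresh  # cosine similarity
    if any(match(p) for p in exclusionary):
        return False
    return any(match(p) for p in inclusionary)
```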
The microphone array 20, the audio processor 34, the wide-angle digital camera system 40 and the image processor 44 may be housed in a single unit. Alternatively, the microphone array 20 and the wide-angle digital camera system 40 are collocated in a capture unit, and the audio processor 34 and the image processor 44 are collocated in a processing unit. In this case, the capture unit may comprise a wireless transmitter and the processing unit may comprise a wireless receiver. The transmitter and receiver provide a wireless link to transmit audio signals from the microphone array 20 to the audio processor 34, and wide-angle image signals from the wide-angle digital camera system 40 to the image processor 44.
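The wireless link is described only as one-way transport of raw microphone channels and wide-angle images. A minimal, hypothetical framing for such a stream might look like the following; the header layout is invented for illustration and is not specified by the patent.

```python
import struct

# Hypothetical framing for the one-way capture-unit link: raw
# per-channel audio blocks and compressed wide-angle frames, each
# tagged so the processing unit can demultiplex the stream.
AUDIO, VIDEO = 0, 1
HEADER = struct.Struct("<BHI")  # kind, channel/frame id, payload length

def pack(kind: int, ident: int, payload: bytes) -> bytes:
    return HEADER.pack(kind, ident, len(payload)) + payload

def unpack(buf: bytes):
    kind, ident, n = HEADER.unpack_from(buf)
    return kind, ident, buf[HEADER.size:HEADER.size + n]
```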
FIG. 2 is a block diagram of another embodiment of an immersive audio/video apparatus. The apparatus comprises a wide-angle digital camera system 60, such as the wide-angle digital camera system 40, and a microphone array 62, such as the microphone array 20. An image processor 64 processes one or more wide-angle images from the wide-angle digital camera system 60 to generate one or more perspective corrected images 66. The portion of the wide-angle images used to define the perspective corrected images is defined by a plurality of parameters. Examples of the parameters include a pan parameter 70, a tilt parameter 72, and a zoom parameter 74. The center of the portion is defined by the pan parameter 70 and the tilt parameter 72. The pan parameter 70 indicates an angle 75 along a first plane, such as a horizontal plane, and the tilt parameter 72 indicates an angle 76 along a second plane, such as a vertical plane. The width of the portion is defined by the zoom parameter 74. The parameters may be provided by a user interface, or by the output of a processor. A user, such as either a content director or a viewer, adjusts the parameters using the user interface. A content director can use the apparatus to create content such as movies, sporting event content and theater event content.
An audio processor 78 is responsive to the microphone array 62 to modify a directionality of the microphone array 62 to correspond to the portion of the wide-angle image defined by the parameters. The directionality may be modified based on the pan parameter 70 and the tilt parameter 72. The audio processor 78 may further cooperate with the image processor 64 to effectively modify a beam width of the microphone array 62 based on the zoom parameter 74. Consider an object 80, which may be a window in security applications or a human in teleconferencing applications, within a field of view 82 of the wide-angle digital camera system 60. The pan parameter 70 and the tilt parameter 72 may be provided to center the object 80 within the perspective corrected images 66. The zoom parameter 74 may be provided to exclude other objects 84 and 86 from the perspective corrected images 66.
Using the pan parameter 70 and the tilt parameter 72, the audio processor 78 processes signals from the microphone array 62 to effectively steer toward the object 80. Using the zoom parameter 74, the audio processor 78 may process signals from the microphone array 62 to vary a beam width about the object 80. Thus, the audio processor 78 produces an audio output 90 which senses audio produced at or near the object 80.
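A short sketch of how the shared pan/tilt/zoom parameters could drive the audio side: pan and tilt become the beam's steering angles, while zoom narrows the beam width and raises the compound-microphone order. The widest-beam figure and the zoom-to-order mapping are modelling assumptions.

```python
import numpy as np

def ptz_to_beam(pan_deg, tilt_deg, zoom):
    """Map image-side pan/tilt/zoom onto the audio side: pan/tilt give
    the beam's steering direction, zoom narrows the beam (modelled
    here as inversely proportional to zoom)."""
    azimuth = np.deg2rad(pan_deg)
    altitude = np.deg2rad(tilt_deg)
    base_width = np.deg2rad(60.0)      # assumed widest beam
    beam_width = base_width / max(zoom, 1.0)
    # The beam width in turn selects how many rings of microphone
    # pairs the compound-microphone product must include.
    order = int(np.ceil(np.log2(max(zoom, 1.0)))) + 1
    return azimuth, altitude, beam_width, order
```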
Similar to the apparatus described with reference to FIG. 1, the elements described with reference to FIG. 2 may be contained in a single unit, or in capture and processing units having a wireless link therebetween.
FIG. 3 is an illustration of an embodiment of an apparatus of either FIG. 1 or FIG. 2. The apparatus comprises a housing 100 having a base 102 and a dome-shaped portion 104. The base 102 is suited for support by or mounting to a flat surface such as a table top, a wall, or a ceiling. The dome-shaped portion 104 may be substantially semispherical or have an alternative substantially convex form. As used herein, the term semispherical is defined as any portion of a sphere, including but not limited to a hemisphere and an entire sphere. Substantially semispherical forms include those that piecewise approximate a semisphere.
The microphone array comprises a plurality of microphones 106 disposed in a semispherical pattern about the dome-shaped portion 104. The microphones 106 may be arranged in accordance with a triangular or hexagonal packing distribution, wherein each microphone is centered within a corresponding one of a plurality of spherical triangles or hexagons. The housing 100 houses and/or supports the wide-angle digital camera system. The wide-angle digital camera system has a hemispherical field of view emanating about a peak 110 of the dome-shaped portion 104. The housing 100 may further house the wireless transmitter described with reference to FIG. 1, or the audio processor (34 or 78) and the image processor (44 or 64).
Incorporating the functionality of FIG. 1, the embodiment of FIG. 3 is capable of detecting an audio source 112 anywhere within the hemispherical field of view, determining the direction of the audio source, and generating a perspective corrected image sequence of the audio source. Incorporating the functionality of FIG. 2, the embodiment of FIG. 3 is capable of panning and zooming wide-angle images to a specific target anywhere within the hemispherical field of view, and automatically having the audio output track the specific target.
FIG. 4 illustrates use of an embodiment of an immersive audio/video apparatus in a teleconferencing application. At one location 150, a capture unit 152, such as the one shown in FIG. 3, is preferably mounted overhead of a first person 156, a second person 158 and a third person 160. The capture unit 152 may be mounted to a ceiling by an extendible/retractable member (not specifically illustrated) such as a telescoping member. Using the member, the capture unit 152 can be deployed down to nearly head level when being used, and returned up toward the ceiling when not being used for a teleconference (but possibly being used for a security application). As an alternative to overhead mounting, a capture unit 152' may be placed on a table 154. For purposes of illustration and example, the first person 156 is standing by the table 154, and the second person 158 and the third person 160 are seated at the table 154. The capture unit 152 wirelessly communicates a plurality of audio signals and a sequence of wide-angle images having a hemispherical field of view to a processing unit 162. During the course of the teleconference, the processing unit 162 detects the directions of the persons 156, 158 and 160 with respect to the capture unit 152. The processing unit 162 outputs three perspective-corrected image sequences: a first sequence of the person 156, a second sequence of the person 158 and a third sequence of the person 160. The processing unit 162 communicates the image sequences, along with the sensed audio, to a computer network 164. Examples of the computer network 164 include, but are not limited to, an internet, an intranet or an extranet.
At another location 170, a fourth person (not illustrated) is seated at his/her personal computer 174. The computer 174 receives the image sequences and the audio via the computer network 164. The computer 174 includes a display 176 which simultaneously displays the three image sequences in three display portions 180, 182 and 184. The display portions 180, 182 and 184 may comprise windows, panes, or alternative means of display segmentation. Even though the three persons 156, 158 and 160 are at significantly different distances below the capture unit 152, each person's image is centered within his/her corresponding image sequence since the units 152 and 162 are capable of locating audio sources with at least two degrees of freedom. To reduce background noise, the processing unit 162 may steer the microphone array toward one or more persons who are speaking at the time.
To reduce bandwidth requirements, the processing unit 162 may transmit the different perspective corrected image sequences using different frame rates. A higher frame rate is used for a speaking participant than for a non-speaking participant, as sensed by the microphone array. Image sequences of speaking participants may be transmitted in a video mode of greater than or equal to 15 frames per second, for example. Image sequences of non-speaking participants may comprise still images which are transmitted at a significantly slower rate. The still images may be periodically refreshed based on a time constant and/or movement detected visually using the processing unit 162.
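A minimal sketch of the bandwidth-saving policy just described, assuming illustrative names and a 10-second still-refresh constant that the patent does not specify:

    from dataclasses import dataclass

    VIDEO_FPS = 15.0        # per the text: >= 15 frames per second for speakers
    STILL_REFRESH_S = 10.0  # assumed refresh time constant for non-speakers

    @dataclass
    class Participant:
        speaking: bool                  # as sensed by the microphone array
        motion_detected: bool           # as detected visually
        seconds_since_last_frame: float

    def should_send_frame(p: Participant) -> bool:
        """Full video rate for active speakers; periodic still refreshes,
        triggered by the time constant and/or visual motion, for others."""
        if p.speaking:
            return p.seconds_since_last_frame >= 1.0 / VIDEO_FPS
        return p.motion_detected or p.seconds_since_last_frame >= STILL_REFRESH_S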
Optionally, image mapping techniques such as face detection may be used to sense the location of the persons 156, 158, and 160 at all times during the call. Each person's face may be substantially centered within an image stream using the results of the image mapping. Image mapping may comprise visually determining one or more persons who are speaking. Image mapping may be used to track persons while they are not speaking. To reduce background noise, the processing unit 162 may steer the microphone array toward one or more persons who are speaking at the time.
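One way such centering could be realized is sketched below; the patent does not prescribe a particular face detector, so the (x, y, w, h) box format and output dimensions are assumptions:

    import numpy as np

    def center_crop_on_face(frame: np.ndarray, face_box, out_w: int = 320,
                            out_h: int = 240) -> np.ndarray:
        """Crop a perspective-corrected frame so the detected face is
        centered, clamping the window to the frame boundaries."""
        x, y, w, h = face_box
        cx, cy = x + w // 2, y + h // 2                        # face center
        x0 = int(np.clip(cx - out_w // 2, 0, frame.shape[1] - out_w))
        y0 = int(np.clip(cy - out_h // 2, 0, frame.shape[0] - out_h))
        return frame[y0:y0 + out_h, x0:x0 + out_w]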
Due to its small size and the inclusion of a one-way wireless link to the processing unit, the capture unit 152 can be mounted virtually anywhere. Since all of the audio and video processing is performed in the processing unit 162, the capture unit 152 serves its purpose by transmitting a continuous stream of audio from each microphone channel and wide-angle video. The wireless link may comprise a BLUETOOTH link, an 802.11b link, a wireless telephone link, or any other secure or non-secure link depending on the specific application.
The ceiling mount or other overhead orientation of the capture unit 152 allows the center of the camera to be used as a document camera. A higher density of pixels in the center is used to resolve the fine detail required to transmit an image of a printed document. For example, the capture unit 152 and the processing unit 162 may cooperate to provide one or more perspective corrected images of a hard copy document 186 on the table 154. The display 176 displays the one or more images in a display region 190.
A more detailed description of various embodiments of the microphone arrays (20 and 62) is provided hereinafter. In a fully spherical microphone array application, microphones are placed in diametrically opposed positions equally spaced about a sphere. The microphones are positioned both equidistantly and symmetrically about each individual microphone. All microphones have the same arrangement of microphones around them; that is, every microphone location is immediately surrounded by the same number of microphones.
Certain three-dimensional geometric figures approximate a sphere in such a way that microphones may be placed at either the centers of their faces or at their vertices. The simplest of these figures are the tetrahedron and the cube. However, these two figures have an insufficient microphone density to allow adequate zooming of the microphone beam. Figures such as the dodecahedron, the icosahedron, and the truncated icosahedron follow the prescribed location rules and allow for robust compound microphone creation.
In the spherical case, there are 2n microphones in the system, where n is an integer greater than zero. This, combined with directional cardioid microphones at each face or vertex, allows for the creation of definable main beam widths with nearly nonexistent side lobes. This is possible because a summation of opposing microphones creates an omnidirectional microphone, and a difference of said microphones creates an acoustic dipole. These compound omnidirectional and dipole microphones are used as building blocks for higher-order compound microphones used in the localized playback of the system. When a sufficient number of microphones is used in such a system, a beam can be formed in software that not only has significant reduction outside of its bounds, but also can maintain a constant beam width while being steered at any angle between neighboring microphones. Thus, the entire sphere can be covered with equal precision and reduction in acoustic signals emanating from sources outside of its beamwidth.
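The sum-and-difference building blocks described above reduce to two lines of arithmetic. A minimal sketch, assuming time-aligned sample buffers from a matched, diametrically opposed cardioid pair:

    import numpy as np

    def omni_and_dipole(front: np.ndarray, back: np.ndarray):
        """Compound elements from one opposing cardioid pair. For ideal
        matched cardioids, 0.5(1+cos t) + 0.5(1-cos t) = 1, so the sum is
        omnidirectional, and the difference is cos t, an acoustic dipole."""
        omni = front + back
        dipole = front - back
        return omni, dipole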
The aforementioned orientations of microphones on a sphere allow for a higher-order compound microphone that can be defined as a relationship of the difference of two on-axis microphones times the nearest on-axis microphone, multiplied by the same relation for each of the nearest equidistant microphone pairs. In the case of a two-dimensional circular array (an example of which is shown in FIG. 5), this expression reduces to m1(c1*m1-m2)*m3(c2*m3-m4)*m5(c3*m5-m6), where m1 to m8 represent eight microphone elements, and cn are constants that determine the direction of the beam relative to an axis 200 defined through microphones m1 and m2. The first compound element, comprised of the m1 and m2 microphone elements, is a variation of a second-order cardioid. The second term, comprised of the elements m3, m4, m5 and m6, covers the closest surrounding pairs. If one wishes to further increase the order of the compound microphone, the next closest sets of pairs would be included with their sets of coefficients cn until the order of the array is reached. In this way, the zoom function of the microphone array may be practiced. The lowest-order zoom function is a cardioid microphone closest to the source. The next level is a second-order modified cardioid directed at the source. The next level is an order involving all of the adjacent microphone pairs as shown above for the two-dimensional circular array. This process may be continued using expanding layers of equidistant microphones until a desired level of isolation is achieved.
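The expression above and the zoom levels that follow can be written directly in code. A sketch, assuming time-aligned sample buffers m1..m6 (the on-axis pair first, then the adjacent pairs) and steering constants c1..c3; capsule matching, delay alignment and output normalization are omitted:

    import numpy as np

    def compound_beam_2d(m, c):
        """Evaluate m1(c1*m1-m2) * m3(c2*m3-m4) * m5(c3*m5-m6) for the
        circular array of FIG. 5; m[0]..m[5] hold m1..m6, c holds c1..c3."""
        out = m[0] * (c[0] * m[0] - m[1])
        out = out * (m[2] * (c[1] * m[2] - m[3]))
        out = out * (m[4] * (c[2] * m[4] - m[5]))
        return out

    def zoom_levels(m, c):
        """Progressive acoustic zoom described in the text."""
        level0 = m[0]                          # cardioid closest to the source
        level1 = m[0] * (c[0] * m[0] - m[1])   # second-order modified cardioid
        level2 = compound_beam_2d(m, c)        # all adjacent pairs included
        return level0, level1, level2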
FIG. 6 shows an example of microphones m1' to m8' located at vertices of a truncated icosahedron whose edges are all the same size (e.g. a bucky ball). For this configuration, the form of the higher-order compound beaming function is defined as follows: m1'(c1'*m1'-m2')*m3'(c2'*m3'-m4')*m5'(c3'*m5'-m6')*m7'(c4'*m7'-m8'). In the case of a truncated icosahedron, the first adjacent ring of equidistant microphones contains three microphone pairs. The second ring of nearly equidistant microphones would contain six pairs, and so on. The variation of the coefficients cn' effectively steers the beam to any angle in altitude or azimuth with nearly constant beamwidth, given the proper values of the cn' and using the closest microphone as m1'. An implementation of this type of system using a half sphere would incorporate half the microphones used in the full sphere plus one additional omnidirectional microphone. The same placement rules are used for the half sphere as in the full sphere. With the addition of the single omnidirectional microphone, the same level of processing is available for beam direction and manipulation. An equivalent dipole microphone can be provided by subtracting an individual cardioid from the omnidirectional microphone. The same series of cardioid times dipole is possible by merely changing the series to m1(c1*m0-m1)*m3(c2*m0-m3)*m5(c3*m0-m5)*m7(c4*m0-m7), where m0 is the omnidirectional microphone.
The array can also be reduced to two or more rings of microphones mounted around the base of the camera and processed similarly to the two-dimensional array in FIG. 5, except in azimuth and a small arc of altitude. This technique has a limited range of vertical steering, but maintains the horizontal range and precision. An example of such an array of coaxial and non-concentric rings is shown in FIG. 7. The microphone pairs are defined by matching a microphone 210 on a top ring 212 of the unit with a diametrically opposed microphone 214 on a bottom ring 216. If the array consists of an odd number of rings, a pair of diametrically opposed microphones 220 and 222 in a center ring 224 is employed.
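A sketch of the pairing rule, assuming both rings carry the same number of equally spaced, rotationally aligned capsules (an assumption; the geometry of FIG. 7 is not dimensioned here):

    def ring_pairs(n_per_ring: int):
        """Match each top-ring microphone with the diametrically opposed
        bottom-ring microphone: mic k pairs with mic (k + n/2) mod n."""
        half = n_per_ring // 2
        return [(("top", k), ("bottom", (k + half) % n_per_ring))
                for k in range(n_per_ring)]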
Automatic acoustic-based steering of the microphone array 20 and wide-angle digital camera system 40 in FIG. 1 may be accomplished by first examining a frequency-band-limited amplitude of each of a series of compound microphones whose beam axis lies on an axis through each microphone capsule, and whose beam width is equal to an angular distance between an on-axis microphone and a nearest neighbor microphone. This beam can be achieved by combining signals produced by an on-axis microphone pair and a closest ring of accompanying microphone pairs. This process mitigates, and preferably eliminates, the possibility of false images due to microphone overlap as previously discussed. The next step includes comparing the output of several newly-created virtual compound microphones spaced within an area of the original compound beam. Each of the resulting beams has the same beam width as the original compound beam, thus allowing overlap between the new beams. Once the audio source 22 is known to be within the initial beam, the overlap of subsequent beams can be used to very accurately locate the audio source 22 within the solid angle of the original beam. Once the audio source 22 is located, the beam can be narrowed by including the next closest ring of equidistant microphones. This iterative process occurs over time, resulting in a reduced initial computation time and a visual and audible zooming on a subject as he/she speaks.
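The coarse-to-fine search reduces to a few steps. In this sketch, beam_energy, coarse_axes and refine_axes_for are assumed helpers standing in for the compound-beam formation described above, and the final energy-weighted average is an illustrative refinement choice rather than the patent's prescription:

    import numpy as np

    def locate_source(beam_energy, coarse_axes, refine_axes_for):
        """Pick the coarse beam containing the source, then refine with
        overlapping virtual beams of the same width inside it."""
        coarse = max(coarse_axes, key=beam_energy)   # strongest coarse beam
        fine = refine_axes_for(coarse)               # overlapping sub-beams
        w = np.array([beam_energy(a) for a in fine])
        est = (w[:, None] * np.asarray(fine)).sum(axis=0)
        return est / np.linalg.norm(est)             # unit direction estimate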
By including an automatic gain control circuit or subroutine which follows the audio processing, the effect of the audible zoom is to reduce other audible noise while the speaker's voice level remains approximately constant. The audio zoom process proceeds as described earlier by beginning with the cardioid signal closest to the audio source 22, switching to the second-order cardioid, and then to higher-order steered beams aimed at the audio source 22 as time progresses.
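A minimal one-pole automatic gain control of the kind described, applied block by block after the beamforming; the target level and smoothing constant are illustrative tuning values, not taken from the patent:

    import numpy as np

    def agc(block: np.ndarray, state: dict, target_rms: float = 0.1,
            alpha: float = 0.05) -> np.ndarray:
        """Hold the beamformed voice near a constant level while the
        narrowing beam strips away background noise."""
        rms = float(np.sqrt(np.mean(block ** 2))) + 1e-12
        state["gain"] = (1 - alpha) * state.get("gain", 1.0) \
                        + alpha * (target_rms / rms)
        return state["gain"] * block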
The video follows a similar zooming process, as illustrated in FIG. 8. The image processor 44 initially generates a perspective corrected image sequence of a quadrant 240 which includes an audio source (e.g. a human 242 that is speaking). Gradually, the image processor 44 generates a perspective corrected image sequence of a smaller portion 244 which includes the human 242. Thereafter, the image processor 44 generates a perspective corrected image sequence of an even smaller portion 246 which provides a head-and-shoulders shot of the human 242. The gradual, coordinated zooming of the audio and video signals acts to reduce a so-called "popcorn" effect of switching between two very different zoomed-in audio and video sources, especially if the two sources are physically near each other.
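The gradual visual zoom can be expressed as an interpolated crop schedule; the linear interpolation and step count below are assumptions chosen for illustration:

    def zoom_schedule(start_box, end_box, steps: int = 30):
        """Yield intermediate (x, y, w, h) crop boxes from the quadrant 240
        down to the head-and-shoulders portion 246; pairing each step with
        the matching audio zoom order gives the coordinated effect."""
        for i in range(1, steps + 1):
            t = i / steps
            yield tuple((1 - t) * s + t * e
                        for s, e in zip(start_box, end_box))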
An alternative implementation of the auto-tracking feature comprises using the first step of the above-described audio location method to find a general location of the subject. Referring to FIG. 8, the general location of the human 242 is determined to be within the portion 244. Center coordinates of the general location are communicated to the image processor 44. A video mapping technique is used to identify all the possible audio sources within the general location. In this example, the human 242 and a non-speaking human 250 are possible audio sources within the general location indicated by the portion 244. Coordinates of these possible sources are fed back to the audio processor 34. The audio processor 34 determines which of the potential sources is speaking using virtual compound microphones directed at the potential sources. Once the audio source is identified, the audio processor 34 sends the coordinates of the audio source to the image processor 64. The audio processor 34 also manipulates the incoming audio data stream to focus the beam of the microphone array 62 on the coordinates of the head of the human 242. This process utilizes a gradual zooming technique as described above.
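The alternative tracking loop can be summarized as follows; every method name is an assumed interface standing in for behavior the text attributes to the audio and image processors:

    def auto_track_step(audio_proc, image_proc):
        """One pass of the combined audio/video tracking described above."""
        region = audio_proc.coarse_location()          # first audio step only
        candidates = image_proc.find_people(region)    # video mapping
        speaker = audio_proc.pick_speaker(candidates)  # virtual beams per source
        image_proc.zoom_to(speaker)                    # gradual visual zoom
        audio_proc.focus_beam(speaker)                 # gradual audio zoom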
Embodiments of the herein-disclosed inventions may be used in a variety of applications. Examples include, but are not limited to, teleconferencing applications, security applications, and automotive applications. In automotive applications, the capture unit may be mounted within a cabin of an automobile. The capture unit is mounted to a ceiling in the cabin, and located to obtain wide-angle images which include a driver, a front passenger, and any rear passengers. Any individual in the automobile may use the apparatus to place calls. Audio beam steering toward the speaking individual is beneficial to reduce background noise. In security and other applications, the capture unit may be autonomously mobile. For example, the capture unit may be mounted to a movable robot for an airport security application.
It will be apparent to those skilled in the art that the disclosed inventions may be modified in numerous ways and may assume many embodiments other than the preferred forms specifically set out and described herein. For example, in contrast to a three-dimensional pattern, the microphones in the microphone array may be arranged in a two-dimensional pattern such as the one shown in FIG. 5. In this case, the microphone array may comprise a ring of microphones disposed around the base of the capture unit. This configuration would allow precise positioning of the transmitting audio source in the azimuth angle, but would not discriminate to the same extent in the altitude angle. Further, the wide-angle digital camera system may be sensitive to non-visible light, such as infrared light, in contrast to visible light. Still further, the wide-angle digital camera system may have a low-light mode to capture images with a low level of lighting. Yet still further, the herein-described profile comparisons may be used to automatically recognize a person's voice. Upon recognizing a person's voice, textual and/or graphical information indicating the person's name, title, company, and/or affiliation may be included as a caption to his/her images.
As an alternative to displaying images of a hard copy document in the display region 190, computer-generated images may be displayed in the display region 190. For example, a word processing document may be shown in the display region 190 for collaborative work by the participants. Alternatively, computer-generated presentation slides may be displayed in the display region 190. Other collaborative computing applications are also enabled using the display region 190. The herein-disclosed capture units may be powered in various ways, including, but not limited to, mains power, a rechargeable or non-rechargeable battery, solar power or wind-up power. The herein-disclosed processing units may be either integrated with or interfaced to a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer, to remotely communicate images and audio. Alternatively, the herein-disclosed processing units may be integrated with a circuit card that interfaces with either a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer, to remotely communicate images and audio. Similarly, the images and audio generated by the processing unit may be remotely received by a wireless mobile telephone, a set-top box, a cable modem, or a general purpose computer.
Accordingly, it is intended by the appended claims to cover all modifications which fall within the true spirit and scope of the present invention.
What is claimed is:

Claims

1. An apparatus comprising:
a microphone array to sense an audio source;
an audio processor responsive to the microphone array to determine a direction of the audio source in relation to a frame of reference, the direction comprising an azimuth angle and an altitude angle;
a wide-angle digital camera system; and
an image processor responsive to the audio processor and the wide-angle digital camera system, the image processor to process at least one wide-angle image from the wide-angle digital camera system to generate at least one perspective corrected image in the direction of the audio source.
2. The apparatus of claim 1 further comprising a housing having a base and a dome-shaped portion, wherein the wide-angle digital camera system has a field of view emanating about a peak of the dome-shaped portion.
3. The apparatus of claim 2 wherein the microphone array comprises a plurality of microphones disposed about the dome-shaped portion.
4. The apparatus of claim 1 wherein the microphone array comprises a plurality of microphones disposed in a substantially semispherical three-dimensional pattern.
5. The apparatus of claim 1 wherein the microphone array comprises a first ring of microphones and at least a second ring of microphones, the first ring coaxial to and non-concentric with the second ring.
6. An apparatus comprising:
a housing having a dome-shaped portion;
a microphone array comprising a plurality of microphones disposed about the dome-shaped portion; and
a wide-angle digital camera system supported by the housing.
7. The apparatus of claim 6 wherein the wide-angle digital camera system has a field of view emanating about a peak of the dome-shaped portion.
8. The apparatus of claim 6 wherein the plurality of microphones are disposed in a substantially semispherical three-dimensional pattern.
9. The apparatus of claim 6 wherein the microphone array comprises a ring of microphones.
10. The apparatus of claim 6 wherein the microphone array comprises a first ring of microphones and a second ring of microphones, the first ring coaxial to and non-concentric with the second ring.
11. The apparatus of claim 6 further comprising:
an audio processor responsive to the microphone array and housed by the housing; and
an image processor responsive to the audio processor and housed by the housing.
PCT/US2003/002235 2002-02-27 2003-01-27 Apparatus having cooperating wide-angle digital camera system and microphone array WO2004114644A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003304231A AU2003304231A1 (en) 2002-02-27 2003-01-27 Apparatus having cooperating wide-angle digital camera system and microphone array

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/083,912 2002-02-27
US10/083,912 US20030160862A1 (en) 2002-02-27 2002-02-27 Apparatus having cooperating wide-angle digital camera system and microphone array

Publications (2)

Publication Number Publication Date
WO2004114644A2 true WO2004114644A2 (en) 2004-12-29
WO2004114644A3 WO2004114644A3 (en) 2005-03-17

Family

ID=27753385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/002235 WO2004114644A2 (en) 2002-02-27 2003-01-27 Apparatus having cooperating wide-angle digital camera system and microphone array

Country Status (3)

Country Link
US (1) US20030160862A1 (en)
AU (1) AU2003304231A1 (en)
WO (1) WO2004114644A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8046219B2 (en) 2007-10-18 2011-10-25 Motorola Mobility, Inc. Robust two microphone noise suppression system

Families Citing this family (152)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030220971A1 (en) * 2002-05-23 2003-11-27 International Business Machines Corporation Method and apparatus for video conferencing with audio redirection within a 360 degree view
AU2003274445A1 (en) * 2002-06-11 2003-12-22 Sony Electronics Inc. Microphone array with time-frequency source discrimination
US7161579B2 (en) 2002-07-18 2007-01-09 Sony Computer Entertainment Inc. Hand-held computer interactive device
US8797260B2 (en) * 2002-07-27 2014-08-05 Sony Computer Entertainment Inc. Inertially trackable hand-held controller
US7613310B2 (en) * 2003-08-27 2009-11-03 Sony Computer Entertainment Inc. Audio input system
US7623115B2 (en) 2002-07-27 2009-11-24 Sony Computer Entertainment Inc. Method and apparatus for light input device
US8947347B2 (en) * 2003-08-27 2015-02-03 Sony Computer Entertainment Inc. Controlling actions in a video game unit
US8073157B2 (en) 2003-08-27 2011-12-06 Sony Computer Entertainment Inc. Methods and apparatus for targeted sound detection and characterization
US7545926B2 (en) * 2006-05-04 2009-06-09 Sony Computer Entertainment Inc. Echo and noise cancellation
US7970147B2 (en) * 2004-04-07 2011-06-28 Sony Computer Entertainment Inc. Video game controller with noise canceling logic
US7697700B2 (en) 2006-05-04 2010-04-13 Sony Computer Entertainment Inc. Noise removal for electronic device with far field microphone on console
US7646372B2 (en) 2003-09-15 2010-01-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program
US7883415B2 (en) 2003-09-15 2011-02-08 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
US7809145B2 (en) * 2006-05-04 2010-10-05 Sony Computer Entertainment Inc. Ultra small microphone array
US7783061B2 (en) * 2003-08-27 2010-08-24 Sony Computer Entertainment Inc. Methods and apparatus for the targeted sound detection
US7850526B2 (en) * 2002-07-27 2010-12-14 Sony Computer Entertainment America Inc. System for tracking user manipulations within an environment
US8570378B2 (en) 2002-07-27 2013-10-29 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US9393487B2 (en) 2002-07-27 2016-07-19 Sony Interactive Entertainment Inc. Method for mapping movements of a hand-held controller to game commands
US7627139B2 (en) * 2002-07-27 2009-12-01 Sony Computer Entertainment Inc. Computer image and audio processing of intensity and input devices for interfacing with a computer program
US9174119B2 (en) 2002-07-27 2015-11-03 Sony Computer Entertainement America, LLC Controller for providing inputs to control execution of a program when inputs are combined
US7803050B2 (en) * 2002-07-27 2010-09-28 Sony Computer Entertainment Inc. Tracking device with sound emitter for use in obtaining information for controlling game program execution
US10086282B2 (en) * 2002-07-27 2018-10-02 Sony Interactive Entertainment Inc. Tracking device for use in obtaining information for controlling game program execution
US8139793B2 (en) 2003-08-27 2012-03-20 Sony Computer Entertainment Inc. Methods and apparatus for capturing audio signals based on a visual image
US7918733B2 (en) * 2002-07-27 2011-04-05 Sony Computer Entertainment America Inc. Multi-input game control mixer
US9474968B2 (en) 2002-07-27 2016-10-25 Sony Interactive Entertainment America Llc Method and system for applying gearing effects to visual tracking
US8233642B2 (en) * 2003-08-27 2012-07-31 Sony Computer Entertainment Inc. Methods and apparatuses for capturing an audio signal based on a location of the signal
US7854655B2 (en) * 2002-07-27 2010-12-21 Sony Computer Entertainment America Inc. Obtaining input for controlling execution of a game program
US8313380B2 (en) * 2002-07-27 2012-11-20 Sony Computer Entertainment America Llc Scheme for translating movements of a hand-held controller into inputs for a system
US8160269B2 (en) 2003-08-27 2012-04-17 Sony Computer Entertainment Inc. Methods and apparatuses for adjusting a listening area for capturing sounds
US7760248B2 (en) 2002-07-27 2010-07-20 Sony Computer Entertainment Inc. Selective sound source listening in conjunction with computer interactive processing
US8686939B2 (en) 2002-07-27 2014-04-01 Sony Computer Entertainment Inc. System, method, and apparatus for three-dimensional input control
US9682319B2 (en) 2002-07-31 2017-06-20 Sony Interactive Entertainment Inc. Combiner method for altering game gearing
US9177387B2 (en) 2003-02-11 2015-11-03 Sony Computer Entertainment Inc. Method and apparatus for real time motion capture
US8072470B2 (en) 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US20070223732A1 (en) * 2003-08-27 2007-09-27 Mao Xiao D Methods and apparatuses for adjusting a visual image based on an audio signal
JP2005086365A (en) * 2003-09-05 2005-03-31 Sony Corp Talking unit, conference apparatus, and photographing condition adjustment method
US7874917B2 (en) * 2003-09-15 2011-01-25 Sony Computer Entertainment Inc. Methods and systems for enabling depth and direction detection when interfacing with a computer program
US9573056B2 (en) 2005-10-26 2017-02-21 Sony Interactive Entertainment Inc. Expandable control device via hardware attachment
US8323106B2 (en) 2008-05-30 2012-12-04 Sony Computer Entertainment America Llc Determination of controller three-dimensional location using image analysis and ultrasonic communication
US8287373B2 (en) 2008-12-05 2012-10-16 Sony Computer Entertainment Inc. Control device for communicating visual information
US10279254B2 (en) 2005-10-26 2019-05-07 Sony Interactive Entertainment Inc. Controller having visually trackable object for interfacing with a gaming system
JP4269883B2 (en) * 2003-10-20 2009-05-27 ソニー株式会社 Microphone device, playback device, and imaging device
FR2861525B1 (en) * 2003-10-24 2006-04-28 Winlight System Finance METHOD AND DEVICE FOR CAPTURING A LARGE FIELD IMAGE AND A REGION OF INTEREST THEREOF
US7663689B2 (en) 2004-01-16 2010-02-16 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US7629995B2 (en) * 2004-08-06 2009-12-08 Sony Corporation System and method for correlating camera views
US8547401B2 (en) 2004-08-19 2013-10-01 Sony Computer Entertainment Inc. Portable augmented reality device and method
US7660428B2 (en) * 2004-10-25 2010-02-09 Polycom, Inc. Ceiling microphone assembly
US8873768B2 (en) 2004-12-23 2014-10-28 Motorola Mobility Llc Method and apparatus for audio signal enhancement
US7936894B2 (en) * 2004-12-23 2011-05-03 Motorola Mobility, Inc. Multielement microphone
JP2006197115A (en) * 2005-01-12 2006-07-27 Fuji Photo Film Co Ltd Imaging device and image output device
WO2006079951A1 (en) * 2005-01-25 2006-08-03 Koninklijke Philips Electronics N.V. Mobile telecommunications device
US7812855B2 (en) * 2005-02-18 2010-10-12 Honeywell International Inc. Glassbreak noise detector and video positioning locator
US7301497B2 (en) * 2005-04-05 2007-11-27 Eastman Kodak Company Stereo display for position sensing systems
JP4886770B2 (en) * 2005-05-05 2012-02-29 株式会社ソニー・コンピュータエンタテインメント Selective sound source listening for use with computer interactive processing
WO2007007340A2 (en) * 2005-07-13 2007-01-18 O.D.F. Optronics Ltd. Observation system
US7864210B2 (en) * 2005-11-18 2011-01-04 International Business Machines Corporation System and methods for video conferencing
WO2007138617A1 (en) * 2006-05-25 2007-12-06 Asdsp S.R.L. Video camera for desktop videocommunication
US7542668B2 (en) * 2006-06-30 2009-06-02 Opt Corporation Photographic device
USRE48417E1 (en) 2006-09-28 2021-02-02 Sony Interactive Entertainment Inc. Object direction using video input combined with tilt angle information
US8781151B2 (en) 2006-09-28 2014-07-15 Sony Computer Entertainment Inc. Object detection using video input combined with tilt angle information
US8310656B2 (en) * 2006-09-28 2012-11-13 Sony Computer Entertainment America Llc Mapping movements of a hand-held controller to the two-dimensional image plane of a display screen
US8229134B2 (en) * 2007-05-24 2012-07-24 University Of Maryland Audio camera using microphone arrays for real time capture of audio images and method for jointly processing the audio images with video images
US9134904B2 (en) * 2007-10-06 2015-09-15 International Business Machines Corporation Displaying documents to a plurality of users of a surface computer
US8139036B2 (en) * 2007-10-07 2012-03-20 International Business Machines Corporation Non-intrusive capture and display of objects based on contact locality
US20090091539A1 (en) * 2007-10-08 2009-04-09 International Business Machines Corporation Sending A Document For Display To A User Of A Surface Computer
US20090091529A1 (en) * 2007-10-09 2009-04-09 International Business Machines Corporation Rendering Display Content On A Floor Surface Of A Surface Computer
US8024185B2 (en) * 2007-10-10 2011-09-20 International Business Machines Corporation Vocal command directives to compose dynamic display text
US9203833B2 (en) * 2007-12-05 2015-12-01 International Business Machines Corporation User authorization using an automated Turing Test
US8542907B2 (en) 2007-12-17 2013-09-24 Sony Computer Entertainment America Llc Dynamic three-dimensional object mapping for user-defined control device
US8840470B2 (en) 2008-02-27 2014-09-23 Sony Computer Entertainment America Llc Methods for capturing depth data of a scene and applying computer actions
US8368753B2 (en) 2008-03-17 2013-02-05 Sony Computer Entertainment America Llc Controller with an integrated depth camera
US8314829B2 (en) * 2008-08-12 2012-11-20 Microsoft Corporation Satellite microphones for improved speaker detection and zoom
US8319858B2 (en) * 2008-10-31 2012-11-27 Fortemedia, Inc. Electronic apparatus and method for receiving sounds with auxiliary information from camera system
US8961313B2 (en) 2009-05-29 2015-02-24 Sony Computer Entertainment America Llc Multi-positional three-dimensional controller
US8650634B2 (en) * 2009-01-14 2014-02-11 International Business Machines Corporation Enabling access to a subset of data
US8527657B2 (en) 2009-03-20 2013-09-03 Sony Computer Entertainment America Llc Methods and systems for dynamically adjusting update rates in multi-player network gaming
US8342963B2 (en) 2009-04-10 2013-01-01 Sony Computer Entertainment America Inc. Methods and systems for enabling control of artificial intelligence game characters
US8393964B2 (en) 2009-05-08 2013-03-12 Sony Computer Entertainment America Llc Base station for position location
US8142288B2 (en) 2009-05-08 2012-03-27 Sony Computer Entertainment America Llc Base station movement detection and compensation
US8610924B2 (en) * 2009-11-24 2013-12-17 International Business Machines Corporation Scanning and capturing digital images using layer detection
US8441702B2 (en) * 2009-11-24 2013-05-14 International Business Machines Corporation Scanning and capturing digital images using residue detection
US20110122459A1 (en) * 2009-11-24 2011-05-26 International Business Machines Corporation Scanning and Capturing digital Images Using Document Characteristics Detection
US8300845B2 (en) 2010-06-23 2012-10-30 Motorola Mobility Llc Electronic apparatus having microphones with controllable front-side gain and rear-side gain
US8433076B2 (en) 2010-07-26 2013-04-30 Motorola Mobility Llc Electronic apparatus for generating beamformed audio signals with steerable nulls
EP2413115A1 (en) * 2010-07-30 2012-02-01 Technische Universiteit Eindhoven Generating a control signal based on acoustic data
US8789175B2 (en) * 2010-09-30 2014-07-22 Verizon Patent And Licensing Inc. Device security system
EP2448265A1 (en) 2010-10-26 2012-05-02 Google, Inc. Lip synchronization in a video conference
US8558894B2 (en) * 2010-11-16 2013-10-15 Hewlett-Packard Development Company, L.P. Support for audience interaction in presentations
CN102082906B (en) * 2011-01-11 2013-04-17 深圳一电科技有限公司 Hand-free high-definition digital camera
US9171551B2 (en) * 2011-01-14 2015-10-27 GM Global Technology Operations LLC Unified microphone pre-processing system and method
JP2012178807A (en) * 2011-02-28 2012-09-13 Sanyo Electric Co Ltd Imaging apparatus
US8743157B2 (en) 2011-07-14 2014-06-03 Motorola Mobility Llc Audio/visual electronic device having an integrated visual angular limitation device
US9210302B1 (en) 2011-08-10 2015-12-08 Google Inc. System, method and apparatus for multipoint video transmission
US8217945B1 (en) * 2011-09-02 2012-07-10 Metric Insights, Inc. Social annotation of a single evolving visual representation of a changing dataset
EP2629440B1 (en) * 2012-02-15 2016-02-10 Harman International Industries Ltd. Audio mixing console
US8917309B1 (en) 2012-03-08 2014-12-23 Google, Inc. Key frame distribution in video conferencing
US8791982B1 (en) 2012-06-27 2014-07-29 Google Inc. Video multicast engine
US9232310B2 (en) 2012-10-15 2016-01-05 Nokia Technologies Oy Methods, apparatuses and computer program products for facilitating directional audio capture with multiple microphones
JP2014143678A (en) * 2012-12-27 2014-08-07 Panasonic Corp Voice processing system and voice processing method
JP6253031B2 (en) * 2013-02-15 2017-12-27 パナソニックIpマネジメント株式会社 Calibration method
CN104113721B (en) * 2013-04-22 2017-08-18 华为技术有限公司 The display methods and device of conference materials in a kind of video conference
EP3950433A1 (en) * 2013-05-23 2022-02-09 NEC Corporation Speech processing system, speech processing method, speech processing program and vehicle including speech processing system on board
WO2015008538A1 (en) * 2013-07-19 2015-01-22 ソニー株式会社 Information processing device and information processing method
JP6524657B2 (en) * 2014-02-27 2019-06-05 株式会社リコー Conference equipment
KR20150102337A (en) * 2014-02-28 2015-09-07 삼성전자주식회사 Audio outputting apparatus, control method thereof and audio outputting system
CN106537471B (en) * 2014-03-27 2022-04-19 昕诺飞控股有限公司 Detection and notification of pressure waves by lighting units
US10222824B2 (en) * 2014-05-12 2019-03-05 Intel Corporation Dual display system
US9570113B2 (en) * 2014-07-03 2017-02-14 Gopro, Inc. Automatic generation of video and directional audio from spherical content
US9693009B2 (en) * 2014-09-12 2017-06-27 International Business Machines Corporation Sound source selection for aural interest
US9685730B2 (en) 2014-09-12 2017-06-20 Steelcase Inc. Floor power distribution system
US9473687B2 (en) * 2014-12-23 2016-10-18 Ebay Inc. Modifying image parameters using wearable device input
US10178374B2 (en) * 2015-04-03 2019-01-08 Microsoft Technology Licensing, Llc Depth imaging of a surrounding environment
US9609275B2 (en) 2015-07-08 2017-03-28 Google Inc. Single-stream transmission method for multi-user video conferencing
US10909384B2 (en) * 2015-07-14 2021-02-02 Panasonic Intellectual Property Management Co., Ltd. Monitoring system and monitoring method
US9788109B2 (en) 2015-09-09 2017-10-10 Microsoft Technology Licensing, Llc Microphone placement for sound source direction estimation
JP2017060029A (en) * 2015-09-17 2017-03-23 パナソニックIpマネジメント株式会社 Wearable camera system and recording control method
EP3151534A1 (en) * 2015-09-29 2017-04-05 Thomson Licensing Method of refocusing images captured by a plenoptic camera and audio based refocusing image system
EP3151535A1 (en) 2015-09-29 2017-04-05 Thomson Licensing Plenoptic camera having an array of sensors for generating digital images and method of capturing an image using a plenoptic camera
US10033928B1 (en) 2015-10-29 2018-07-24 Gopro, Inc. Apparatus and methods for rolling shutter compensation for multi-camera systems
JP6645129B2 (en) * 2015-11-04 2020-02-12 株式会社リコー Communication device, control method, and control program
US9792709B1 (en) 2015-11-23 2017-10-17 Gopro, Inc. Apparatus and methods for image alignment
US9973696B1 (en) 2015-11-23 2018-05-15 Gopro, Inc. Apparatus and methods for image alignment
US9848132B2 (en) 2015-11-24 2017-12-19 Gopro, Inc. Multi-camera time synchronization
WO2017119555A1 (en) * 2016-01-08 2017-07-13 Lg Electronics Inc. Portable camera
US9602795B1 (en) 2016-02-22 2017-03-21 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US9973746B2 (en) 2016-02-17 2018-05-15 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US9743060B1 (en) 2016-02-22 2017-08-22 Gopro, Inc. System and method for presenting and viewing a spherical video segment
US10250986B2 (en) * 2016-05-24 2019-04-02 Matthew Marrin Multichannel head-trackable microphone
US9922398B1 (en) 2016-06-30 2018-03-20 Gopro, Inc. Systems and methods for generating stabilized visual content using spherical visual content
EP3287868B1 (en) * 2016-08-26 2020-10-14 Nokia Technologies Oy Content discovery
US9934758B1 (en) 2016-09-21 2018-04-03 Gopro, Inc. Systems and methods for simulating adaptation of eyes to changes in lighting conditions
US10268896B1 (en) 2016-10-05 2019-04-23 Gopro, Inc. Systems and methods for determining video highlight based on conveyance positions of video content capture
US10043552B1 (en) 2016-10-08 2018-08-07 Gopro, Inc. Systems and methods for providing thumbnails for video content
US10684679B1 (en) 2016-10-21 2020-06-16 Gopro, Inc. Systems and methods for generating viewpoints for visual content based on gaze
US10785445B2 (en) * 2016-12-05 2020-09-22 Hewlett-Packard Development Company, L.P. Audiovisual transmissions adjustments via omnidirectional cameras
US10194101B1 (en) 2017-02-22 2019-01-29 Gopro, Inc. Systems and methods for rolling shutter compensation using iterative process
US11164606B2 (en) * 2017-06-30 2021-11-02 Qualcomm Incorporated Audio-driven viewport selection
US10469818B1 (en) 2017-07-11 2019-11-05 Gopro, Inc. Systems and methods for facilitating consumption of video content
US10447394B2 (en) * 2017-09-15 2019-10-15 Qualcomm Incorporated Connection with remote internet of things (IoT) device based on field of view of camera
CN109961781B (en) * 2017-12-22 2021-08-27 深圳市优必选科技有限公司 Robot-based voice information receiving method and system and terminal equipment
GB201800918D0 (en) * 2018-01-19 2018-03-07 Nokia Technologies Oy Associated spatial audio playback
US10587807B2 (en) 2018-05-18 2020-03-10 Gopro, Inc. Systems and methods for stabilizing videos
US10951859B2 (en) 2018-05-30 2021-03-16 Microsoft Technology Licensing, Llc Videoconferencing device and method
US10735882B2 (en) * 2018-05-31 2020-08-04 At&T Intellectual Property I, L.P. Method of audio-assisted field of view prediction for spherical video streaming
CN110767246B (en) * 2018-07-26 2022-08-02 深圳市优必选科技有限公司 Noise processing method and device and robot
US10750092B2 (en) 2018-09-19 2020-08-18 Gopro, Inc. Systems and methods for stabilizing videos
US11157738B2 (en) * 2018-11-30 2021-10-26 Cloudminds Robotics Co., Ltd. Audio-visual perception system and apparatus and robot system
US11463615B2 (en) 2019-03-13 2022-10-04 Panasonic Intellectual Property Management Co., Ltd. Imaging apparatus
US10904657B1 (en) * 2019-10-11 2021-01-26 Plantronics, Inc. Second-order gradient microphone system with baffles for teleconferencing
US11232796B2 (en) * 2019-10-14 2022-01-25 Meta Platforms, Inc. Voice activity detection using audio and visual analysis
US10917719B1 (en) * 2019-11-19 2021-02-09 Lijun Chen Method and device for positioning sound source by using fisheye lens
CN113129907B (en) * 2021-03-23 2022-08-23 中国科学院声学研究所 Automatic detection device and method for field bird singing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5940118A (en) * 1997-12-22 1999-08-17 Nortel Networks Corporation System and method for steering directional microphones
JPH11331827A (en) * 1998-05-12 1999-11-30 Fujitsu Ltd Television camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715319A (en) * 1996-05-30 1998-02-03 Picturetel Corporation Method and apparatus for steerable and endfire superdirective microphone arrays with reduced analog-to-digital converter and computational requirements

Also Published As

Publication number Publication date
AU2003304231A8 (en) 2005-01-04
AU2003304231A1 (en) 2005-01-04
US20030160862A1 (en) 2003-08-28
WO2004114644A3 (en) 2005-03-17

Similar Documents

Publication Publication Date Title
US20030160862A1 (en) Apparatus having cooperating wide-angle digital camera system and microphone array
US5940118A (en) System and method for steering directional microphones
EP2179586B1 (en) Method and system for automatic camera control
US10122972B2 (en) System and method for localizing a talker using audio and video information
EP1377041B1 (en) Integrated design for omni-directional camera and microphone array
US6005610A (en) Audio-visual object localization and tracking system and method therefor
EP2538236B1 (en) Automatic camera selection for videoconferencing
US7015954B1 (en) Automatic video system using multiple cameras
JPH11331827A (en) Television camera
US8571192B2 (en) Method and apparatus for improved matching of auditory space to visual space in video teleconferencing applications using window-based displays
US10490202B2 (en) Interference-free audio pickup in a video conference
US20220070371A1 (en) Merging webcam signals from multiple cameras
EP1377847A2 (en) Method and apparatus for audio/image speaker detection and locator
US20110050840A1 (en) Apparatus, system and method for video call
WO2015198964A1 (en) Imaging device provided with audio input/output function and videoconferencing system
JP2009049734A (en) Camera-mounted microphone and control program thereof, and video conference system
Fiala et al. A panoramic video and acoustic beamforming sensor for videoconferencing
Pingali et al. Audio-visual tracking for natural interactivity
JPH06276514A (en) Camera control system in video conference system
JP5653771B2 (en) Video display device and program
US20220382132A1 (en) Systems and methods for video camera systems for smart tv applications
JPH05153582A (en) Tv conference portrait camera turning system
JPH07193796A (en) Video communication system
Green et al. Panocam: Combining panoramic video with acoustic beamforming for videoconferencing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP