Title of Invention

Knowledge acquisition system and processes

Abstract

A system and method which utilise separate channels to provide learning related content to each ear. The sound must be delivered specifically to the correct ear, for example by headphones. In one form, preselected intellectual content is delivered to the right ear and predominantly non-intellectual content, such as music, to the left ear. In another form the content in the right ear may be a time shifted version of the left ear content. The system is especially applicable to assisting in training, pre-exam study and cramming.
Full Text FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
As amended by the Patents (Amendment) Act, 2005
&
The Patents Rules, 2003
As amended by the Patents (Amendment) Rules, 2005
COMPLETE SPECIFICATION
(See section 10 and rule 13)
TITLE OF THE INVENTION
Knowledge acquisition system and processes
INVENTOR
Name : WARD Bruce Winston
Nationality : Australian National
Address : 54 Alan Road, Berowra Heights, NSW 2082, Australia
APPLICANTS
Name : IP EQUITIES PTY LTD
Address : 54 Alan Road, Berowra Heights, NSW 2082, Australia
Nationality : an Australian Company
PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the nature of this invention and the manner in which it is to be performed :

KNOWLEDGE ACQUISITION SYSTEM AND PROCESSES
Technical Field
The present invention relates to systems and processes relating to enhancing specific aspects of learning.
Background Art
Many devices and processes have been proposed over the past 50 years in order to
provide some enhancement or improvement in learning processes. One train of such processes purports to rely on neurophysiology, and in particular, certain aspects of the division of functions between the left and right hemispheres of the brain.
An example of this is so-called phonics and similar systems, in which the retention of intellectual content is asserted to be enhanced by the simultaneous playing of certain types of music to both ears while learning. Other approaches are so-called binaural wave training and Lozanov accelerated learning, which play identical sounds into both ears in an attempt to bring the wave patterns in the two hemispheres of the brain into synchrony and so promote knowledge acquisition.
Despite changes in teaching methods, there is still a need for students, whether at school, college, university or in training courses, to memorize material and to retain it in a
working state. Students need to revise material studied as part of their course, and prepare for exams. This typically involves revision, re-writing and re-reading of notes, attempts at past papers, cover and check memorisation, and similar processes. The process of preparing for examinations is often referred to as cramming. There appears to have been no systematic attempt to provide a technological aid for cramming and pre-examination preparation, despite the clear need for such assistance by students.
It is an object of the present invention to provide an arrangement in which the
learning of discrete information, particularly for cramming, training, exam study and similar purposes, can be enhanced.

Summary of the Invention
In a broad form, one aspect of the present invention relates to presenting information via a headset or similar arrangement to a user, in which the left and right ears receive entirely distinct information. The discrete left and right ear signals are not in the form of stereo sound, nor are they intended to create some common auditory effect. In one form, the right ear receives preselected intellectual content, whilst the left ear receives non-intellectual content, for example music. The left ear content may be mixed with aural tags or labels, or include some intellectual content. In other implementations the left side is fed only with aural tags arranged in a patterned way. The left and right ear signals are in each implementation distinct signals.
In one aspect, the present invention provides a system for assisting knowledge acquisition by a user, wherein audio data is presented via a separate left ear signal and right ear signal, wherein said right ear signal includes predominantly preselected intellectual content, and said left ear signal includes predominantly non intellectual content, and each ear is presented with only the channel intended for that ear.
In another aspect, the present invention provides a method of processing information for use in a system for assisting knowledge acquisition by a user, said process including the steps of:
Providing a set of content;
Processing said content so as to produce a set of coaural data; and
Providing said coaural data to a user.
The present invention further provides an audio data set, adapted to be reproduced as a sound signal, the set including a separate left ear signal and right ear signal, wherein said right ear signal includes predominantly preselected intellectual content, and said left
ear signal includes predominantly non intellectual content.
In another aspect, the present invention provides a method of providing a processed audio file, including at least the steps of inputting, at a user location, text content; submitting said content to a remotely located server; processing said content to produce a corresponding audio file; and supplying said audio file.

Preferably, the content for each ear is generated by processing the desired information to produce the two distinct sound channels.
It is theorised by the inventor that all intellectual information is processed by the brain's auditory systems, whether it is read or heard aloud. The brain processes, for example, a visually read word into a series of sounds, which are then recognised. It is well established that the different hemispheres of the brain process information in different and in some respects complementary ways. In general terms, logical intellectual content is generally processed by the left-brain, and intuitive, creative and emotional content by the right-brain.
It is further theorised by the inventor that, when acquiring information to be learned by reading or listening, the right and left brains become distracted, and so effectively unable to function cooperatively, when the content, particularly audible content, is boring, linear, monologic or monotonous.
It is the present inventor's contention that applying the proper sound stimulation to each hemisphere can assist in the acquisition of discrete information. The right ear is functionally connected to the left brain, so that intellectual information in the first instance (for example the names of the countries in South America) is supplied to the right ear. However, if the left ear is subjected to essentially the same stimulus, the right-brain may become distracted or, more generally, act to trigger a process to seek more interesting input, and therefore detract from primary acquisition processing and effective recall of the information. It is further believed that the timing and pace of the stimulation should be varied to assist in this process.
Accordingly, by providing a suitable discrete and appropriate stimulus to each ear, especially non-linear or varied input, the distraction impulse is reduced, and so neural information processing and recall are improved.
It is important that the ears receive the intended content, and not a mixture of left
and right ear content delivered over, say, a speaker system in a room. The use of
headphones or similar devices is preferred, in order to achieve the desired separate content.
This form of audio content will be referred to as coaural. For the purposes of the specification and claims, coaural means discrete unmixed monaural content suitable for separate delivery to the left and right ears.
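By way of illustration only, the following Python sketch writes a short two-channel WAV file in which the left and right channels carry entirely independent monaural content. The file name, sample rate and placeholder tones are arbitrary assumptions and merely stand in for the non-intellectual and intellectual streams described above.

```python
import math
import struct
import wave

RATE = 44100        # sample rate in Hz (arbitrary choice for this sketch)
DURATION = 2.0      # seconds of audio to generate


def tone(freq_hz, n_samples):
    """Generate n_samples of a quiet sine tone as 16-bit integer samples."""
    return [int(9000 * math.sin(2 * math.pi * freq_hz * i / RATE))
            for i in range(n_samples)]


n = int(RATE * DURATION)
left = tone(440, n)     # stands in for the non-intellectual (left ear) stream
right = tone(300, n)    # stands in for the spoken intellectual (right ear) stream

# Interleave the two independent monaural streams into one two-channel file.
with wave.open("coaural_example.wav", "wb") as out:
    out.setnchannels(2)
    out.setsampwidth(2)          # 16-bit samples
    out.setframerate(RATE)
    out.writeframes(b"".join(struct.pack("<hh", a, b)
                             for a, b in zip(left, right)))
```

When reproduced through headphones, each ear then receives only its own channel; no stereo image or common auditory effect is intended.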
Brief Description of Drawings
Various implementations of the present invention will now be described with reference to the accompanying figures, in which:
Figure 1 is a general block diagram of one form of the inventive system;
Figure 2 is a more detailed block diagram of the processing operations;
Figure 3 is a block diagram illustrating signal synthesis; and
Figure 4 is a timing graph.
Detailed Description of the Drawings
The present invention will be described with reference to various practical implementations. However, it will be appreciated that the present invention is capable of various implementations, and the present alternatives are intended to be illustrative and
not limiting.
The practical implementation in hardware of the present invention is most readily achieved using largely conventional audio systems. However, the present invention is not particularly concerned with the specifics of the hardware and storage systems used, but with their functional arrangement and content.
Figure 1 illustrates the general arrangement of one embodiment of the present invention. Personal computer, generally designated as 20, includes a display 22 and
keyboard 23. This allows for the desired intellectual content to be input. For example, the data may be text or a list of the names of the countries of South America. The data will be
explained in more detail below.
The data is then converted to speech, using a text to speech converter TTS 24. In this implementation, this is located at PC 20, but the TTS 24 may be located elsewhere. The speech data is then sent to a designated website, for processing. It will be appreciated that the present invention contemplates various forms of speech input. The website returns a coaural data set which includes non-linear stimulus audio 28, intended as a left ear signal 26, and intellectual content 29, intended as an audio stimulus 27 for the right ear.
This coaural data 21 is then sent back to the PC 20. This may be a real time or delayed process. The audio data may be in any suitable form. For example, it may be in the MP3 format widely used for portable music players, or any suitable analogue or digital format.
The coaural data 21 is preferably downloaded onto a medium suitable for an audio player 13. The audio player then reproduces the coaural signal as discrete signals to the left and right headphones 12, 11. Alternatively, the coaural signal could be directly output to speakers from PC 20.
The PC 20 could in a suitable implementation contain all the software necessary to compile the coaural signal. At an educational institution, a dedicated computer could be used to carry out the required processing and produce an audio signal on suitable media. Alternatively, essentially all functionality could be carried out at a website or in a networked remote server, with no substantial local software being required.
It is also contemplated that in addition to fully user defined content as described above, suitable pre-defined data could be made available for known subject matter. In this case, the step of producing the coaural data from the subject matter input would already have been performed when the user selects the desired data. The pre-defined data may, for example, be stored on a website or on storage media, and "State geography syllabus year
8" may be selected.

Figure 2 describes in more detail the process by which the coaural data is produced. Content 30 is input to PC 20. This is then sent via network 32 to server 33. This may be via any suitable network, for example the internet, a dial up connection, or even an offline mechanism. The content is preferably input as text into PC 20. However, in
alternative implementations the content could be a spoken audio signal, or any other input which the server 33 is adapted to process.
In this implementation, the text is converted to speech by the TTS 24 at the server.
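The specification does not prescribe a particular text-to-speech engine. A minimal sketch of this step, assuming the third-party pyttsx3 engine purely as one possible offline choice (the file name and speaking rate are likewise assumptions), might look as follows:

```python
import pyttsx3


def text_to_speech(text, out_path="intellectual_content.wav"):
    """Render user-supplied text to a spoken audio file (role of block 24).

    pyttsx3 is used here only as one possible offline engine; any TTS
    facility producing an audio stream would serve the same purpose.
    """
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)      # words per minute, illustrative value
    engine.save_to_file(text, out_path)
    engine.runAndWait()                  # blocks until the file has been written
    return out_path


# Example: text_to_speech("Battle of Plevna, eighteen seventy nine")
```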
A voice modelling system 38 is used to enhance, modulate and add expression and variety to the TTS signal, or other human or computer-generated inputs passed through server 33, as a means of increasing attention and engagement of the left brain and/or inhibiting boredom or preventing distraction of the right brain.
A content assembly processor 42 may select, by algorithms, the intellectual content 37 as pre-processed by modelling system 38 and assemble this with silences, audible tags, null signals, or other features intended to add variety to the signal, as a further means of inhibiting boredom or distraction of both the right and left brains.
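A minimal sketch of the assembly role described for processor 42 is given below. The data structure and helper names are invented for illustration only; the sketch interleaves pre-processed intellectual units with silences of pseudo-random length and occasional audible tags.

```python
import random
from dataclasses import dataclass


@dataclass
class Unit:
    kind: str        # "content", "silence" or "tag"
    payload: str     # text of the content unit, or a tag label
    seconds: float   # nominal duration of the unit


def assemble(content_units, tag_labels, min_gap=0.3, max_gap=1.2, seed=0):
    """Interleave intellectual units with silences and occasional audible tags.

    After each content unit a silence of pseudo-random length is inserted,
    and every third unit is followed by a tag, so the assembled right ear
    stream is not a monotonous, strictly linear run of content.
    """
    rng = random.Random(seed)
    stream = []
    for i, text in enumerate(content_units):
        stream.append(Unit("content", text, 1.0))
        stream.append(Unit("silence", "", rng.uniform(min_gap, max_gap)))
        if tag_labels and i % 3 == 2:        # arbitrary tagging rule
            stream.append(Unit("tag", rng.choice(tag_labels), 0.4))
    return stream


stream = assemble(["Battle of", "Plevna", "eighteen", "seventy", "nine"],
                  ["aural tag 1", "aural tag 2"])
```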
The above audible tags may in one embodiment link sets or subsets of audible data content to other previous or later sets or subsets of audible data in the same content assembly as a means of aiding co-location in the brain.
In a further embodiment the above audible tags may link sets or subsets of audible data content to visual user interface alphanumeric or visually coded tags on the screen of PC 20 or in other places whereby both aural and visual data may be identified as connected by the brain as an aid to neurological processing and subsequent co-location in the brain.
In parallel, a bank of preselected audio material 35 is used as the basis for the left
ear signal. This material may be pre-prepared spoken content, music, rhythmic sounds, or other data as will be described below in more detail. A suitable clock 39 and time base
algorithm 40 provide a signal to ensure that the timing of the assembled signal is appropriate to the desired user outcome.
Responsive to the time base signal, the assembler 42 prepares the separate left and right ear signals as a composite but twin discrete channel dataset. The output signal 39 is then output to the user 40, via mechanisms discussed above.
It is emphasised that the coaural audio signal is entirely different from conventional audio signals delivered via headphones or the like. It is not a stereo or other signal which seeks to produce an illusion of depth or sound space in the user. The intention in general is that the signals for each ear be monaural, and that the content be quite distinct. It is not the same mono channel content in each ear. The nature of the signal will be more apparent from the examples below; however, the separateness of the channels - that they are in fact two signals, not two aspects of one signal - is important to understanding the present invention.
Figure 3 describes in more detail one implementation of the audio processing system. Via a suitable network 25, the required content is supplied to server 33. The TTS 24 processes the text content as previously discussed. However, the output is also processed to detect phonemes at detector 25. Audio source 44 provides a basic human voice or text-converted signal or a computer generated voice signal, which is further converted and combined with the voice data. The purpose of this step is to enhance, modulate and add expression and variety to the voice signal as a means of increasing attention and engagement of left brain and/or inhibiting boredom or preventing distraction of the right brain.
A voice tempo and pitch controller 36 inputs a rhythmic or arrhythmic time base into the digital voice stream and in some versions balances this with decoded voice phonemes, feeding this stream to a music compiler 37 which establishes composite voice formats and digital base tracks in preparation for voice modelling in a DSP voice processor 38. The voice modeller 38 modifies the digital voice stream by imposing tone, modulation, voice style, voice gender, variations in pace and delivery, and tonal and pitch variation, to enhance
and make the voice tracks fed to it more engaging to the user, adding interest and variety to prevent boredom and maintain brain engagement.
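The voice modeller 38 is described functionally rather than by a specific algorithm. As a hedged illustration only, a crude way to impose pace and pitch variation on successive voice segments is naive resampling, sketched below with numpy; a practical DSP voice processor would use more refined time-scale and pitch-scale modification.

```python
import numpy as np


def resample_segment(samples, factor):
    """Naively speed up (factor > 1) or slow down (factor < 1) a voice segment.

    Linear-interpolation resampling changes pitch and tempo together; it is
    used here only to show variation being imposed on successive segments.
    """
    n_out = max(1, int(len(samples) / factor))
    positions = np.linspace(0, len(samples) - 1, num=n_out)
    return np.interp(positions, np.arange(len(samples)), samples)


def vary_segments(segments, factors=(0.9, 1.0, 1.1)):
    """Cycle a pace/pitch variation across a list of 1-D sample arrays."""
    return [resample_segment(seg, factors[i % len(factors)])
            for i, seg in enumerate(segments)]
```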
A further processor 45 may select from the several discrete streams of content, assemble these with silences, audible tags, null signals, or other features intended to add variety to the signal, and pass the result to processor 28.
The final processed coaural audio input is sent back via the internet to the PC 20 for downloading and playback as previously described.
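The round trip between PC 20 and server 33 may be sketched as an ordinary HTTP exchange. The endpoint URL, request field and file names below are assumptions introduced only for illustration; the specification does not prescribe any particular protocol.

```python
import requests

SERVER_URL = "https://example.com/coaural"   # hypothetical endpoint


def request_coaural_audio(text, out_path="coaural.mp3"):
    """Submit text content to the remote server and save the returned audio."""
    response = requests.post(SERVER_URL, data={"content": text}, timeout=120)
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)            # the processed coaural audio data
    return out_path
```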
Figure 4 shows representations of a time domain signal 12 of a type imposed by blocks 36 and 37 of Figure 3, whose time base indicates a typical 4 beats to the bar, synchronised by a MIDI time clock protocol on the data stream running between units 36 and 38 of Figure 3. The beat imposed is used to compile and insert melodic and/or staggered prose, song, words, numbers, null spaces and other content in a stream of content, to modulate delivery and content variation so as to enhance the track and make it more engaging to the user.
Figure 4 further extracts section 13 as a representation of the oscilloscope screens shown at 14 and 15, where the magnified section 13 indicates subdivisions of beats and the assembly of phoneme-controlled voice as song, prose, words, numbers, null spaces and other content. The snap-to-grid system of MIDI phoneme assembly represented at 14 and 15 of Figure 4, as controlled by units 36 and 37 of Figure 3 above, thereby assembles the mixed voice, space and related variety of content tracks. By snap-to-grid is meant that the time domain signals are locked to the beat structure.
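The snap-to-grid behaviour amounts to quantising event onset times to subdivisions of the beat. A minimal sketch, assuming an arbitrary tempo and subdivision, is:

```python
def snap_to_grid(onsets, bpm=120, subdivisions=4):
    """Quantise event onset times (in seconds) to the nearest beat subdivision.

    At 120 bpm with 4 subdivisions per beat the grid step is
    60 / 120 / 4 = 0.125 s; each onset is moved to the nearest multiple of
    that step, locking the content to the beat structure.
    """
    step = 60.0 / bpm / subdivisions
    return [round(t / step) * step for t in onsets]


print(snap_to_grid([0.02, 0.49, 1.83, 2.91]))   # [0.0, 0.5, 1.875, 2.875]
```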
It will be appreciated that the software and hardware requirements may be met in large part using conventional modules and packages.
The actual content to be provided in various implementations will now be described with reference to the following tables; that is, the times at which the elements of the intellectual content are delivered, and the timing, both relative and absolute, of the
left ear channel. It is important to note that the best way to present particular content will vary with the nature of the content.
The timing of the stimuli may be presented in a variety of ways. In one form, differing or regular time periods between each series of units of intellectual and non-intellectual content may be composed and delivered, varying in spacing either randomly, pseudo-randomly or in a predetermined pattern. In another embodiment, regular spacings between each series of units of content may be used, or in other cases an irregular mixture of time spacings and signal insertion parameters.
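As an illustration of these spacing schemes, the short sketch below generates onset times for a series of content units using either a fixed gap or a seeded pseudo-random gap; the distributions and bounds chosen are assumptions, not values taken from the specification.

```python
import random


def onset_times(n_units, mode="pseudo-random", regular_gap=1.0,
                min_gap=0.1, max_gap=5.0, seed=42):
    """Return start times (seconds) for n_units successive content units.

    "regular" uses a fixed gap between units; "pseudo-random" draws each gap
    from a seeded uniform distribution within the stated bounds, so the same
    spacing pattern can be reproduced on every playback.
    """
    rng = random.Random(seed)
    times, t = [], 0.0
    for _ in range(n_units):
        times.append(round(t, 2))
        t += regular_gap if mode == "regular" else rng.uniform(min_gap, max_gap)
    return times


print(onset_times(5))              # pseudo-random spacing
print(onset_times(5, "regular"))   # fixed spacing
```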
The term beat signal is used in some of the examples below. A beat signal may be an audible code forming a series whereby the brain is enabled to recognize both sets or a sub-set of related content elements. This is an aid to information uptake by the user, to encouraging the information to be sited in a related or linked brain locus, and so to assisting recall of knowledge in sets or subsets of related information.
Each set of audio units may be vertically alternated within the same right or left channel field to provide variety, maintain interest and reduce the level of predictability, and so reduce boredom or distraction when listening to repeated content.
For the avoidance of doubt, it is emphasised that some intellectual content may be provided on either or both channels.
Some content may be best presented as a discrete list on the right ear side, with leading or trailing mnemonic labels on the left ear side. This may be most appropriate for core subject information, such as lists, alphabets, times tables, names, dates, places and the like. Table 1 below illustrates such an approach. The left ear channel has a zero or null signal mixed with beats or random audible tags inserted.

Table 1

Audio Unit (Subset No) | Typical periodicity (seconds) | Typical left ear channel content (in this case the non-intellectual, right brain content) | Middle (zero infill or signal crossover) | Typical right ear channel content (in this case intellectual, left brain content, or knowledge to be acquired)
1 | 0.0 | Aural Tag 1 | 0 | Battle of
2 | 0.5 | Beat signal | 0 | 0
3 | 1.8 | 0 | 0 | Plev
4 | 2.9 | Beat signal | 0 | 0
5 | 3.3 | 0 | 0 | na
6 | 3.4 | space | 0 | 0
7 | 4.8 | 0 | 0 | eighteen
8 | 5.6 | Tone signal | 0 | 0
9 | 7.1 | 0 | 0 | seven
10 | 8.2 | space | 0 | 0
11 | 9.5 | 0 | 0 | ty
12 | 10.5 | Beat signal | 0 | 0
13 | 12.7 | space | 0 | 0
14 | 13.3 | 0 | 0 | Nine
Note that there is zero crossover or mid field signal. This is the preferred mid field signal situation.
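The layout of Table 1 can equally be represented in software as two independent event lists sharing one time base, one per channel. The sketch below simply encodes the first few rows as (time, content) pairs; it illustrates the data layout only and is not part of the specification.

```python
# (start time in seconds, content) pairs, one list per discrete channel,
# transcribing the first few rows of Table 1.
left_channel = [
    (0.0, "Aural Tag 1"),
    (0.5, "Beat signal"),
    (2.9, "Beat signal"),
    (3.4, "space"),
    (5.6, "Tone signal"),
]
right_channel = [
    (0.0, "Battle of"),
    (1.8, "Plev"),
    (3.3, "na"),
    (4.8, "eighteen"),
    (7.1, "seven"),
]


def events_up_to(t, channel):
    """Return the channel events whose start time is at or before time t."""
    return [content for start, content in channel if start <= t]
```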
Another form of content delivery involves the use of "aural marker codes" or "mnemonic aural labels" on the left ear channel, which are followed by a discrete, normally compiled or aurally-diverse, trailing or reprised version of the same list or other information assemblage on the right channel, interspersed with zero signal feed on one or both sides occurring at (pseudo)random spacing, at time periods predetermined by experiment according to content type but typically between 0.1 and 5 seconds. This method is outlined in Table 2.
This example illustrates some additional techniques. A space or silence (null signal) occurs simultaneously in the left channel and right channel units, as exemplified by lines 2 to 6 inclusive and lines 10 to 13 inclusive of Table 2. This has the intended function of allowing brain synapses and other neurology in the planum temporale of the brain and elsewhere time to either (a) neurologically reference the knowledge unit to establish if that information unit is known, and therefore not to be the subject of further processing, or (b) neurologically reflect on that information unit to establish if that unit is not known and
therefore to be the subject of further processing (uptake to memory). This refers to a postulated neurological process, known to the inventor as "reflecto-referencing", which this invention is intended to promote when listening to content for purposes of study, learning or revision.
This example has spaces inserted to allow reflecto-referencing, mixed with beats or random tags. In this example the left and right channels are configured with a varied time base, having zero signals interspersed with other left and right signals.
Table 2

Audio unit No | Typical periodicity (seconds), time at completion | Typical left ear channel content | Mid field content | Typical right ear channel content
1 | 0.0 | Beat signal BB1 | 0 | 0
2 | 2.5 | 0 (Reflecto-reference space) | 0 | Battle of Plevna eighteen seventy nine
3 | 3.5 | 0 (Reflecto-reference space) | 0 | 0
4 | 4.5 | 0 (Reflecto-reference space) | 0 | 0
5 | 5.5 | 0 (Reflecto-reference space) | 0 | 0
6 | 5.8 | 0 (Reflecto-reference space) | 0 | 0
7 | 6.3 | 0 (Reflecto-reference space) | 0 | 0
8 | 6.6 | Beat signal BB2 | 0 | 0
9 | 7.9 | 0 | 0 | Russo-Turkish War preceded Crimea
10 | 8.9 | 0 (Reflecto-reference space) | 0 | 0
11 | 9.9 | 0 (Reflecto-reference space) | 0 | 0
12 | 10.2 | 0 (Reflecto-reference space) | 0 | 0
13 | 12.7 | 0 (Reflecto-reference space) | 0 | 0
14 | 13.3 | Beat signal BB3 | 0 | Next subset

In a preferred embodiment, regular or irregular cadence, rhythm, beat, or musical or tonal variations may be employed in composing the audible content in the left channel. Other variations and possibilities for timing and content are possible within the general scope of the present invention.
In some few otherwise normal individuals all or parts of the functions of normal right and left brain are transposed. There is a conventional, simple user-administered test which allows this to be established and thus the headset channels reversed. Thus in these tables "right" means "left" and vice versa in the case of hemispherically transposed individuals.
It will be appreciated that the present invention could be implemented with a variety of audio hardware. In some implementations, the user may simply select from a stored set of audio data; the method of the present invention also enables this simple implementation. The content and optimum means of delivery are matters which actual trials for each situation will establish, as this is not a fully understood field.

CLAIMS:
1. A system for assisting knowledge acquisition by a user, wherein audio data is presented via a separate left ear signal and right ear signal, wherein said right ear signal includes predominantly preselected intellectual content, and said left ear signal includes predominantly non intellectual content, and each ear is presented with only the channel intended for that ear.
2. A system according to claim 1, wherein said separate signals are presented using earphones or a headset.
3. A system according to claim 1 or claim 2, wherein said right ear signal and left ear signal are selected and related so as to assist acquisition of specific knowledge selected by or for the user.
4. A system according to any one of the preceding claims, wherein the content of either or both signals has been processed and altered so as to enhance the non-predictability of the signal.
5. A system according to claim 4, wherein the left and right ear signals are time shifted relative to each other.
6. A system according to any one of the preceding claims, wherein the right ear
signal further includes music, beats, silences, audible tags or other non-intellectual material.
7. A system according to any one of the preceding claims, wherein the left ear signal includes some intellectual content.
8. A method of processing information for use in a system for assisting
knowledge acquisition by a user, said process including the steps of
Providing a set of content;
Processing said content so as to produce a set of coaural data; and
Providing said coaural data to a user.
9. A method according to claim 8, wherein the data is provided on a storage
medium.
10. A method according to claim 8 or claim 9, wherein the coaural data comprises a right ear signal including predominantly preselected intellectual content, and a left ear signal including predominantly non intellectual content.
11. A method according to any one of claims 8 to 10, wherein the content is predetermined and available for supply to a user.
12. A method according to any one of claims 8 to 10, wherein the intellectual content is produced using text content provided by the user.
13. A method according to any one of claims 8 to 12, wherein said right ear signal and left ear signal are selected and related so as to assist acquisition of specific knowledge selected by or for the user.
14. A method according to any one of claims 8 to 13, wherein the content of either or both signals has been processed and altered so as to enhance the non-predictability of the signal.
15. A method according to any one of claims 8 to 14, wherein the left and right ear signals are time shifted relative to each other.
16. A method according to any one of claims 8 to 15, wherein the right ear signal further includes music, beats, silences, audible tags or other non-intellectual material.
17. A method according to any one of claims 8 to 16 wherein the left ear signal includes some intellectual content.

18. An audio data set, adapted to be reproduced as a sound signal, the set
including a separate left ear signal and right ear signal, wherein said right ear
signal includes predominantly preselected intellectual content, and said left
ear signal includes predominantly non intellectual content.
19. An audio data set according to claim 18, wherein the intellectual content is produced from text content supplied by a user.
20. An audio data set according to claim 18 or 19, wherein said right ear signal and left ear signal are selected and related so as to assist acquisition of specific knowledge selected by or for the user.
21. An audio data set according to any one of claims 18 to 20, wherein the content of either or both signals has been processed and altered so as to enhance the non-predictability of the signal.
22. An audio data set according to any one of claims 18 to 21, wherein the left and right ear signals are time shifted relative to each other.
23. An audio data set according to any one of claims 18 to 22, wherein the right ear signal further includes music, beats, silences, audible tags or other non-intellectual material.
24. An audio data set according to any one of claims 18 to 23, wherein the left ear signal includes some intellectual content.
25. A method of providing a processed audio file, including at least the steps of inputting, at a user location, text content; submitting said content to a remotely located server; processing said content to produce a corresponding audio file; and supplying said audio file.
26. A method according to claim 25, wherein the audio file is in coaural format.

27. A method according to claim 26, wherein a right ear signal includes predominantly preselected intellectual content, and a left ear signal includes predominantly non intellectual content.
28. A method according to any one of claims 25 to 27, wherein the user inputs said text content using a web interface.

Dated this 6th day of February 2006

(Jose M A)
Agent for the Applicants
of Khaitan & Co

ABSTRACT
A system and method which utilise separate channels to provide learning related content to each ear. The sound must be delivered specifically to the correct ear, for example by headphones. In one form, preselected intellectual content is delivered to the right ear and predominantly non-intellectual content, such as music, to the left ear. In another form the content in the right ear may be a time shifted version of the left ear content. The system is especially applicable to assisting in training, pre-exam study and cramming.

Patent Number 224351
Indian Patent Application Number 142/MUMNP/2006
PG Journal Number 02/2009
Publication Date 09-Jan-2009
Grant Date 10-Oct-2008
Date of Filing 06-Feb-2006
Name of Patentee I P EQUITIES PTY LTD
Applicant Address 54 Alan Road, Berowra Heights, NSW 2082
Inventors:
1. WARD Bruce Winston, 54 Alan Road, Berowra Heights, NSW 2082
PCT International Classification Number G06F19/00
PCT International Application Number PCT/AU2003/00876
PCT International Filing date 2003-07-08
PCT Conventions: NA