Talk:Language learning strategies


WikiProject Linguistics (Applied Linguistics): Unassessed
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.
This article is supported by the Applied Linguistics Task Force.

A neurological approach

Until now, most language learning strategies have focused on a psychological approach. I would like to introduce a neurological approach that may hold promise for improving listening comprehension.

One of the most difficult aspects of learning a new language is listening comprehension. People generally store their native language on the left side of the brain, while languages learned as an adult are generally stored on the right. People who suffer an injury to the left side of the brain, say in a car accident, may lose the ability to speak their native language and be left with whatever language they learned as an adult. People who learn a new language past the age of eleven tend to have an accent.

Ninety-five percent of right-handers process language on the left side of the brain. Information heard through the right ear is generally processed on the left side of the brain, and the ear you hold the phone to when speaking your native language is generally the ear associated with the side of the brain that processes that language. If you can work out which ear sends information to the side of the brain associated with your native language and begin listening to the target foreign language through that ear only, you can improve listening comprehension and have the information processed and stored on the proper side of the brain.

A better suggestion I found is to listen to something in a foreign language in one ear, then in the other, and then in both. Over time this does seem to improve listening comprehension and retention of the true sounds in a way that listening with both ears at once alone does not. What one finds is that some words sound clearer in one ear while other words sound clearer in the other. Exercising each ear individually until all words are clear in each, and separately listening with both ears until the words are also clear in both, seems to improve overall listening comprehension. The order does not necessarily matter; it is probably best to mix it up: sometimes start with the right ear and continue with the left, at other times start with the left ear and continue with the right, and at still other times start with both ears and then move on to each ear individually. This way the brain is trained to perceive the words properly no matter which side they come from in a real-life situation, since reality is random and sporadic and does not occur under controlled conditions.
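To make the drill concrete, here is a minimal sketch of how such practice clips could be generated automatically. This is only an illustration of the idea above, not an existing tool: the file names are placeholders, and I am assuming NumPy and SciPy for WAV handling.

```python
import random

import numpy as np
from scipy.io import wavfile

def make_ear_drill(in_path: str, out_path: str) -> None:
    """Repeat a clip three times -- left ear only, right ear only,
    both ears -- in random order, as suggested above."""
    rate, data = wavfile.read(in_path)
    if data.ndim > 1:                        # downmix stereo to mono first
        data = data.mean(axis=1).astype(data.dtype)

    silence = np.zeros_like(data)
    variants = [
        np.column_stack([data, silence]),    # left ear only
        np.column_stack([silence, data]),    # right ear only
        np.column_stack([data, data]),       # both ears
    ]
    random.shuffle(variants)                 # vary the order between sessions
    wavfile.write(out_path, rate, np.concatenate(variants))

make_ear_drill("lesson.wav", "lesson_drill.wav")  # placeholder file names
```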

It's weird, but it is almost as if the sounds heard through one ear are processed out of sync with the same sounds heard through the other, perhaps partly because they arrive at each ear at slightly different times. This seems to make the signals in the brain collide out of sync, so that they mesh wrongly and cancel out. It might also partly explain why listening is harder when the speaker talks fast: the effects of desynchronization become more pronounced. There has been limited research showing that the brain can distinguish sound arrival times between one ear and the other and use that information to its advantage, e.g. to help determine the location of a sound, or to help work out the true sounds arriving from a given direction when they are interfered with by sounds from another direction (though this research has mostly shown results only for low-pitched sounds). The brain's ability to exploit and properly process the difference in arrival time might work better at a younger age, when the ears are more finely tuned, and may diminish with age.
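For a sense of scale, the arrival-time difference between the two ears is tiny. A back-of-the-envelope calculation (the head width and speed of sound here are round-number assumptions):

```python
# Rough upper bound on the interaural time difference (ITD) for a sound
# arriving from directly to one side. Both constants are approximations.
head_width_m = 0.22         # approximate ear-to-ear distance in metres
speed_of_sound_m_s = 343.0  # speed of sound in air at about 20 degrees C

itd_seconds = head_width_m / speed_of_sound_m_s
print(f"max ITD = {itd_seconds * 1e6:.0f} microseconds")  # about 640 microseconds
```

That sub-millisecond figure fits the low-pitch finding mentioned above: timing cues of that size are only unambiguous for sounds whose wavelength is longer than the head is wide, i.e. lower-pitched sounds.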

With your native language, your brain more or less fills in the 'unfocused' sounds with what it knows, but with a new language it cannot. If you listen with one ear at a time, this unfocused effect seems to go away and the sound becomes 'focused'. Perhaps with headphones, since both signals arrive at the ears at exactly the same time, the problem is less pronounced. But if the delay has to do with the brain, the ears, or the ear-brain complex, then perhaps an artificial delay could be introduced into one earpiece of someone's headphones relative to the other, to bring the signals into sync by the time they reach the brain. The right delay may vary from person to person, though, just as prescription glasses vary from person to person.
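If someone wanted to experiment with that artificial-delay idea, a minimal sketch might look like the following. The delay value is per-person guesswork (as the glasses analogy suggests), the file names are placeholders, and NumPy/SciPy are again assumed for WAV handling.

```python
import numpy as np
from scipy.io import wavfile

def delay_one_channel(in_path: str, out_path: str,
                      delay_ms: float, ear: str = "left") -> None:
    """Shift one stereo channel later in time by delay_ms milliseconds,
    padding the other channel at the end so both stay the same length."""
    rate, data = wavfile.read(in_path)
    if data.ndim == 1:                       # mono source: duplicate to stereo
        data = np.column_stack([data, data])

    pad = np.zeros(int(rate * delay_ms / 1000), dtype=data.dtype)
    ch = 0 if ear == "left" else 1
    delayed = np.concatenate([pad, data[:, ch]])
    other = np.concatenate([data[:, 1 - ch], np.zeros_like(pad)])
    channels = [delayed, other] if ch == 0 else [other, delayed]
    wavfile.write(out_path, rate, np.column_stack(channels))

# Try a range of small delays to find what sounds "in focus" for you.
delay_one_channel("lesson.wav", "lesson_delayed.wav", delay_ms=0.5)
```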

The above idea is not subject to patent or copyright protection, and any program that uses any of the above information cannot be made subject to intellectual property claims. Anyone may create a language learning tool that uses these ideas (e.g. by playing something in one ear, then the other, then both, or by synchronizing the sounds, or by applying any of the above suggestions), but no one may use intellectual property law to prevent others from doing the same.

— Preceding unsigned comment added by 99.109.144.159 (talk) 17:22, 25 July 2013 (UTC)