|
Simon (HB9DRV) wrote:
>> ...but the Morse code decoding needs some work.

> I agree - I would love to have a month later this year to work on CW
> decoding. I have no doubts that a computer can decode better than a human,
> just needs someone (!) to write the decoder.

Oh, ye of great faith!

Just think of what a hard time computers have with tough processing problems such as recognition of natural language, i.e. freely spoken speech, by a machine that has not been trained on that particular speaker. How easy it is for a human, and how hard it is for a machine, even after decades of work by lots of people.

Sverre
LA3ZA

_______________________________________________
Elecraft mailing list
Post to: [hidden email]
You must be a subscriber to post to the list.
Subscriber Info (Addr. Change, sub, unsub etc.):
http://mailman.qth.net/mailman/listinfo/elecraft
Help: http://mailman.qth.net/subscribers.htm
Elecraft web page: http://www.elecraft.com
Sverre, LA3ZA
K2 #2198, K3 #3391
LA3ZA Blog: http://la3za.blogspot.com
LA3ZA Unofficial Guide to K2 modifications: http://la3za.blogspot.com/p/la3za-unofficial-guide-to-elecraft-k2.html
|
Programmers have got to believe, Sverre, or nothing will get done. And Simon is a programmer! So, if it can be done, Simon will do it. If it can't be done, he will come closer than the rest. And I don't know any humans who can copy RTTY or PSK-31 by ear, yet the computer manages to do that.
Willis 'Cookie' Cooke
K5EWJ
|
|
|
In reply to this post by WILLIS COOKE
On Wed, 2009-01-21 at 17:18, WILLIS COOKE wrote:
> And, I don't know any humans that can copy RTTY by ear,
> or PSK-31 and the computer manages to do that.

Years ago I met a fellow visiting W1AW who could copy RTTY by ear. He couldn't keep up with tape-sent RTTY at 60 wpm, but could copy somebody hunt-and-pecking on a keyboard. At that time I could identify by ear "RYRYRYRY" and "QST DE W1AW", but not much more than that.

Al N1AL
|
In reply to this post by Sverre Holm (LA3ZA)
But copying CW isn't like trying to understand natural language. If
computers can now beat grandmasters at chess, computers should be able to copy any code that a good operator can decipher. I don't even think we need more powerful computers; we just need better algorithms.

73!
Dan KB6NU

----------------------------------------------------------
CW Geek and ARRL Volunteer Instructor
Read my ham radio blog at http://www.kb6nu.com
LET'S REALLY MAKE THE ARRL THE NATIONAL ASSOCIATION FOR HAM RADIO
|
In reply to this post by WILLIS COOKE
It is interesting to see the responses to my statement on the difficulty of machines copying CW better than humans. Although this is a little off-topic here, I hope we can have a short discussion of it anyway.

First, the success of negative-SNR communication modes such as Olivia, JT65, and PSK31 is evidence that a well-designed computer algorithm should be able to perform better than a human. But that is on codes that have been designed for machine decoding.

Second, 'better' may mean many things: faster, many QSOs decoded in parallel, or - what I mean here - copy at lower SNR and under difficult conditions with fading and interference. There is no doubt that a computer has much more capacity for speed and parallel decoding than a human.

The steps that a good algorithm needs to perform are something like this:

- real-time frequency analysis and filtering
- detect a Morse signal and lock on to a particular frequency
- adaptive estimation of the data rate, and adaptive matched filtering for optimal detection
- decoding of dashes/dots/spaces into letters
- decoding of letters into words

The first steps are signal processing: filtering, detection and adaptivity. See e.g. http://www.journal.au.edu/ijcim/jan99/ijcim_ar1.html for some ideas on the adaptive estimation. As a side remark, Coherent CW was a way of avoiding the adaptation to a variable rate and easing machine decoding, but it does not seem to have been a success.

I believe that it takes an extraordinary algorithm to lock onto a very weak signal reliably, but even more so to do the last and maybe even the second-to-last step, and this is where the similarity with speech recognition is greatest. As an example, say that mine is a weak DX call and I'm sending CQ de LA3ZA LA3ZA LA3ZA. On the receiving end you hear DA---, LA3-T, L-3ZA due to fading and interference. This is where a good operator is able to use a priori information on the syntax of a callsign, similarities between the Morse codes for various letters, and the three partial calls to piece this together into LA3ZA.
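[Editor's note: Sverre's LA3ZA example above can be sketched in a few lines. Column-wise voting across the repeated partial copies recovers most of the call, and the position where the vote ties is exactly where the operator's a priori knowledge (callsign syntax, similar Morse codes) has to take over. The copies are assumed to be already aligned, which a real decoder would have to do first:]

```python
# Column-wise vote over repeated partial copies, as in the LA3ZA example.
# '-' marks a position lost to fading; a tie between candidates is left
# as '?', since voting alone cannot decide it.

from collections import Counter

copies = ["DA---", "LA3-T", "L-3ZA"]

def merge(copies):
    out = []
    for chars in zip(*copies):
        votes = Counter(c for c in chars if c != "-")
        if not votes:
            out.append("?")
            continue
        (best, n), *rest = votes.most_common()
        # a tie means voting alone can't decide -- this is where the
        # a priori knowledge of callsign syntax would come in
        out.append(best if not rest or rest[0][1] < n else "?")
    return "".join(out)

print(merge(copies))  # -> LA3Z?
```

The vote resolves D-vs-L (D -.. and L .-.. are an easy QSB confusion) but leaves the last position ambiguous between T and A, which is Sverre's point: the final step needs higher-level knowledge, not just more copies.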
I'm not saying this is not doable, only that it may take a good programmer more than a month to do it, and maybe much more.

--
Sverre
2008/2009: F/LA3ZA
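[Editor's note: the middle steps in Sverre's list - rate estimation plus dash/dot/space classification - can be sketched as below. This is only a toy: the dit estimate is the crude minimum-mark-length trick, and the 2x/5x gap thresholds and the sample timings are invented for illustration.]

```python
# Classify mark/space durations into Morse symbols, estimating the dit
# length from the stream itself (shortest mark seen).

def classify(marks, spaces):
    """marks/spaces: alternating on/off durations in ms, starting with a mark."""
    dit = min(marks)                       # crude adaptive dit estimate
    symbols = []
    for i, m in enumerate(marks):
        symbols.append("-" if m > 2 * dit else ".")
        if i < len(spaces):
            if spaces[i] > 5 * dit:
                symbols.append(" / ")      # word gap (nominally 7 dits)
            elif spaces[i] > 2 * dit:
                symbols.append(" ")        # letter gap (nominally 3 dits)
    return "".join(symbols)

# "CQ" at roughly 20 wpm (dit ~ 60 ms), with hand-sent jitter:
marks  = [190, 55, 180, 65, 170, 180, 60, 175]
spaces = [60, 65, 55, 210, 55, 60, 70]
print(classify(marks, spaces))  # -> -.-. --.-
```

Even this toy shows why the step is fragile: one mark stretched by QSB past the 2x threshold, and the letter changes.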
|
|
In reply to this post by Dan Romanchik KB6NU
Dan Romanchik KB6NU wrote:
> But copying CW isn't like trying to understand natural language. If
> computers can now beat grandmasters at chess, computers should be able
> to copy any code that a good operator can decipher. I don't even
> think we need more powerful computers; we just need better algorithms.

Humans use lexicographical and semantic clues to fill in dropped CW characters, and computers can do the same. But this goes way beyond the simple signal processing used in, say, the K3's present CW decoder or the one used in HRD. (I studied natural language recognition in college and was anxious to play with either neural networks or traditional AI methods as the foundation for CW decoding, but my other classes got in the way :)

One idea from the early days of AI is the so-called "blackboard" model. Imagine a garbled sentence on a blackboard, with various experts offering their opinions about what each letter and word is based on their specialized knowledge of word morphology, letter frequency, syntax, semantics, etc. You weigh these opinions based on degree of confidence, and once there's enough evidence for a letter or word, you fill it in, which in turn offers additional information to the highest-level expert, who might be considering the actual meaning of a phrase. His predictions can then strengthen the evidence for lower-level symbols, and so on. Such methods are very algorithm-intensive, but might be useful for some aspects of CW stream parsing.

A neural network could handle this, too, and has the advantage of self-organization. This is how I'd approach it (assuming unlimited free time--not!). You could use any of several different types of networks that have been proven successful at NLP (natural language processing).
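[Editor's note: the blackboard idea can be made concrete in a few lines - each expert posts a (symbol, confidence) opinion about a garbled position, and the weighted vote writes the winner back to the board. The experts and confidence values below are invented purely for illustration:]

```python
# Toy "blackboard" fill-in: experts vote on the missing letter in "L?3ZA",
# weighted by confidence; the winner is written back to the board.

from collections import defaultdict

def combine(opinions):
    """opinions: list of (symbol, confidence) pairs. Return best symbol."""
    score = defaultdict(float)
    for sym, conf in opinions:
        score[sym] += conf
    return max(score, key=score.get)

opinions = [
    ("A", 0.6),   # morphology expert: "LA" is a common callsign prefix
    ("A", 0.5),   # letter-frequency expert: 'A' is likely after 'L'
    ("U", 0.2),   # signal expert: the garbled element also resembled ..-
]

best = combine(opinions)
board = "L?3ZA".replace("?", best)
print(board)  # -> LA3ZA
```

A real blackboard system would iterate: once "LA3ZA" is on the board, a higher-level expert (callsign syntax) feeds confidence back down to the letter level.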
For example, you might take the incoming CW, break it into samples (say a few samples per bit at the highest code speed to be processed), shift the serial data representing 5 to 20 letters into a serial-to-parallel shift register, then feed the parallel data to the network's inputs. Or you could use a network with internal feedback (memory), with just one input, which itself could be "fuzzy" (the analog voltage from an envelope detector) or digital (0 or 1, depending on the output of a comparator looking at the CW stream). The output might be a parallel binary word, perhaps ASCII, or a single output with multiple levels, where the voltage itself represents a symbol.

To make this work, you need at least three things: an input representation that provides adequate context (e.g., if you want to decode a letter, the input should contain at least a few letters on either side of the target); a sufficiently complex network; and a large corpus of clean text with which to train the network (probably thousands of words, drawn from actual on-air content).

One classic method of training the network involves placing known-good signals at the input, then comparing the desired outputs to the actual outputs, and "back-propagating" the resulting error through the network--from outputs to hidden layers to inputs--so that the network's nodes gradually acquire the proper "weights." Once the network has been trained to the point that it perfectly copies clean CW, you can then present it with a noisy signal stream. A well-designed network would be able to correct dropped CW elements or even letters if its internal representation is highly evolved. The network will have learned language-specific rules, and you don't have to know how it works, any more than you know how your own brain does it.

The actual implementation is left as an exercise for the reader. If you come up with an algorithm written in 'C', let me know and I'll try to port it to the K3's PIC.
Wayne
N6KR

---
http://www.elecraft.com
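[Editor's note: a toy version of the training loop Wayne describes - one hidden layer, sigmoid units, plain back-propagation - taught to tell an all-dit letter from an all-dah letter and then shown a "faded" input. The layer sizes, learning rate, and the noisy sample are arbitrary choices, not anything from the post:]

```python
# Minimal backprop sketch: learn S (dit dit dit -> [0,0,0]) vs O
# (dah dah dah -> [1,1,1]) from analog element lengths.

import math, random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_in, n_hid = 3, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(n_hid)]
    out = sigmoid(sum(w2[j] * h[j] for j in range(n_hid)) + b2)
    return h, out

def train(x, target, lr=0.5):
    global b2
    h, out = forward(x)
    d_out = (out - target) * out * (1 - out)       # error term at the output
    for j in range(n_hid):
        d_hid = d_out * w2[j] * h[j] * (1 - h[j])  # error back-propagated to hidden node j
        w2[j] -= lr * d_out * h[j]
        b1[j] -= lr * d_hid
        for i in range(n_in):
            w1[j][i] -= lr * d_hid * x[i]
    b2 -= lr * d_out

for _ in range(2000):
    train([0, 0, 0], 0.0)   # S: dit dit dit
    train([1, 1, 1], 1.0)   # O: dah dah dah

_, p = forward([0.9, 0.2, 0.8])   # noisy, mostly dah-like element lengths
print("O" if p > 0.5 else "S")
```

This is of course the "clean text first, noisy stream later" recipe from the post in miniature; a usable decoder would need far more inputs, context, and training data.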
|
In reply to this post by Sverre Holm-3
Weak-signal modes like WSPR, JT65, MFSK etc. work as well as they do and can dig below the noise because they use different tones, rather than tone/no-tone as in CW. The timing of the signal elements is also precisely known. Even with computer-sent Morse, the program does not know the speed at which it is being sent, so it has to work that out before it can start. The vagaries of propagation then throw their spanner in the works, as the decoding algorithm does not know whether the absence of a tone is a valid signal element or QSB. If you then throw in the imprecision of timing caused by hand-sent Morse, you can see the computer algorithm really has a hard job to do.

Computer Morse-decoding algorithms currently in use go no further than assuming the tones are clearly distinguishable from the spaces and that the timing of the elements is predictable. To improve the decoding performance would, I think, require the application of artificial intelligence, to get the computer to reassess what it first thinks it has received in the light of what makes sense in the context of an amateur QSO. This is pretty much what we do when we receive code by ear.

First of all, the human brain is probably more adaptive to irregular element timing - left-footed sending - than a computer algorithm. It "learns" the guy's rhythm and uses that to decode what he is sending, rather than relying on the rigid symbol lengths of computer-generated Morse. Secondly, the human brain uses context and knowledge to fill in the gaps and make sense of what is received. If someone sends "QTH IS" you expect a place name to follow. If you miss a couple of letters, or what you got doesn't look like a word, you use your knowledge to work out what it should be.

It probably would be possible to write a computer program to do that, but it would be an incredibly challenging piece of programming that would need an extremely keen mind and a great deal of time to accomplish. It would probably be a PhD-level project.
Julian, G4ILO. K2 #392 K3 #222 KX3 #110
* G4ILO's Shack - http://www.g4ilo.com
* KComm - http://www.g4ilo.com/kcomm.html
* KTune - http://www.g4ilo.com/ktune.html
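[Editor's note: Julian's "QTH IS" point is the easiest part to prototype - fuzzy-match the mangled copy against a word list for the expected slot. A sketch using Python's standard difflib; the place list and the garbled input are made up:]

```python
# After "QTH IS" we expect a place name, so match the mangled copy
# against a list of known places and take the closest one.

import difflib

places = ["LONDON", "BRISTOL", "LEEDS", "GLASGOW", "CARDIFF"]

received = "LOND0N"  # a corrupted element turned one letter into junk
guess = difflib.get_close_matches(received, places, n=1, cutoff=0.5)
print(guess[0] if guess else received)  # -> LONDON
```

The hard part Julian identifies is everything around this: knowing *when* to expect a place name, a callsign, or an RST, which is where the context modelling lives.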
|
In reply to this post by Dan Romanchik KB6NU
----- Original Message -----
From: "Dan Romanchik KB6NU" <[hidden email]>

> I don't even
> think we need more powerful computers; we just need better algorithms.

Exactly!

Simon Brown, HB9DRV
www.ham-radio-deluxe.com
|
In reply to this post by wayne burdick
> From: wayne burdick
> Sent: Thursday, January 22, 2009 9:18 AM
> Subject: Re: [Elecraft] HRD cw copy
>
> The actual implementation is left as an exercise for the reader. If you
> come up with an algorithm written in 'C', let me know and I'll try to
> port it to the K3's PIC.

Sounds simple! I'm busy today, so would <someone> please get on this so I can try it out this weekend? Thanks! ;-)

Adam - ka7ark
|
In reply to this post by wayne burdick
Wayne Burdick wrote:

> Humans use lexicographical and semantic clues to fill in dropped CW
> characters, and computers can do the same. But this goes way beyond the
> simple signal processing used in, say, the K3's present CW decoder or
> the one used in HRD.

<.....>

Sounds good, Wayne. When can you have it done? Upper right hand button would be my choice.

73 de Terry, W0FM
|
Wayne,
This is great stuff, but.... Suggestion for Elecraft: make a K3 panadaptor and a KW automatic, SO2R amp higher priorities!

73,
Andy, AE6Y

----- Original Message -----
From: "Terry Schieler" <[hidden email]>
Sent: Thursday, January 22, 2009 1:04 PM
Subject: Re: [Elecraft] CW copy: Wayne's solution

> Wayne Burdick wrote:
>
> > Humans use lexicographical and semantic clues to fill in dropped CW
> > characters, and computers can do the same.

<.....>
|
This would have been an interesting project back when K6XN and I were
at Schlumberger's AI lab.

73,
doug

From: "Andrew Faber" <[hidden email]>
Date: Thu, 22 Jan 2009 13:26:42 -0800

> This is great stuff, but.... Suggestion for Elecraft: make a K3 panadaptor
> and a KW automatic, SO2R amp higher priorities!

<.....>
|
In reply to this post by Terry Schieler
On Thu, 22 Jan 2009 15:04:11 -0600
"Terry Schieler" <[hidden email]> wrote:

> > Wayne Burdick wrote:
> >
> > Humans use lexicographical and semantic clues to fill in dropped CW
> > characters, and computers can do the same.

<.....>

> Sounds good, Wayne. When can you have it done? Upper right hand button
> would be my choice.

Why would anybody want to use a "CW decoder" in the first place??? I guess that we who can "mentally" copy CW in our BRAINS have an advantage over those who really didn't APPLY THEMSELVES to accomplish what we did (thousands WORLDWIDE!!!). I'm not an elitist, nor are our other "Brethren" who can copy "Intl. Morse"; we just appreciate its VALUE, not only in past years but currently as well!!!

Regards,
Jim/nn6ee
|
In reply to this post by wayne burdick
Me thinks you've hit upon one of the man's loves! :)
On Thu, 2009-01-22 at 09:18 -0800, wayne burdick wrote:

> Dan Romanchik KB6NU wrote:
>
> > But copying CW isn't like trying to understand natural language. If
> > computers can now beat grandmasters at chess, computers should be able
> > to copy any code that a good operator can decipher. I don't even
> > think we need more powerful computers; we just need better algorithms.
>
> Humans use lexicographical and semantic clues to fill in dropped CW
> characters, and computers can do the same. But this goes way beyond the
> simple signal processing used in, say, the K3's present CW decoder or
> the one used in HRD. (I studied natural language recognition in college
> and was anxious to play with either neural networks or traditional AI
> methods as the foundation for CW decoding, but my other classes got in
> the way :)

<.....>
|
In reply to this post by Julian, G4ILO
I have probably less than a rudimentary understanding of the codec used to decode analog signals from a hard drive, but maybe the PRML (Partial Response, Maximum Likelihood) algorithm could be used here. I am sure hard-drive noise is significantly different, but the signal-to-noise ratio probably is not. It might be helpful to use a previously invented wheel. The hardware might also be available or modifiable cheaply; after all, every hard drive (millions of them) has a PRML codec in its circuitry. PRML might be usable here at the first layer somehow.

Tom Price WA6SUS
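[Editor's note: the "ML" in PRML is maximum-likelihood sequence detection, typically a Viterbi detector that picks the bit sequence best explaining the whole analog waveform rather than slicing each sample independently. A toy two-state version applied to a noisy CW envelope - all probabilities, the noise model, and the samples are invented for illustration:]

```python
# Two-state Viterbi over noisy envelope samples: the 0.45 sample is
# ambiguous on its own, but the sequence detector resolves it from context.

import math

states = ("space", "mark")
trans = {"space": {"space": 0.8, "mark": 0.2},
         "mark":  {"space": 0.2, "mark": 0.8}}
means = {"space": 0.0, "mark": 1.0}

def log_emit(state, sample, sigma=0.4):
    # log of a Gaussian likelihood around the state's expected level
    return -((sample - means[state]) ** 2) / (2 * sigma ** 2)

def viterbi(samples):
    score = {s: log_emit(s, samples[0]) for s in states}
    path = {s: [s] for s in states}
    for x in samples[1:]:
        new_score, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: score[p] + math.log(trans[p][s]))
            new_score[s] = score[prev] + math.log(trans[prev][s]) + log_emit(s, x)
            new_path[s] = path[prev] + [s]
        score, path = new_score, new_path
    best = max(states, key=score.get)
    return path[best]

# A dah (three mark samples) amid noise and QSB:
samples = [0.1, 0.0, 0.9, 0.45, 0.8, 0.1, 0.05]
print(viterbi(samples))  # -> ['space', 'space', 'mark', 'mark', 'mark', 'space', 'space']
```

Sample-by-sample slicing at 0.5 would call the 0.45 sample a space and break the dah in two; the sequence view keeps it whole, which is essentially Tom's point about borrowing the read-channel trick.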
|
In reply to this post by JIM DAVIS-11
----- Original Message -----
From: "JIM DAVIS" <[hidden email]> > Why would anybody want to use a "CW decoder" in the first place??? When you've received a QSL card from a deaf Danish ham, as I did in 1979, you'll realise why. This ham decoded by placing his 'listening' finger on the cone of the loudspeaker. Simon Brown, HB9DRV www.ham-radio-deluxe.com |
|
In reply to this post by JIM DAVIS-11
Perhaps for the same kind of reasons people use a DX Cluster instead of tuning round the bands and listening. CW will always have the unique advantage that it can be copied without computer assistance, but it is still a digital mode, and if the fact that it can be sent and received using a computer makes more people use the mode, I don't think that's a bad thing.
Julian, G4ILO. K2 #392 K3 #222 KX3 #110
* G4ILO's Shack - http://www.g4ilo.com * KComm - http://www.g4ilo.com/kcomm.html * KTune - http://www.g4ilo.com/ktune.html |
|
In reply to this post by JIM DAVIS-11
Jim,
I thought that my sarcasm would be obvious in my post below Wayne's. I probably should have included a :o). 73, Terry...W0FM :o) :o) :o) :o) -----Original Message----- From: JIM DAVIS [mailto:[hidden email]] Sent: Thursday, January 22, 2009 5:49 PM To: Terry Schieler; [hidden email] Subject: Re: [Elecraft] CW copy: Wayne's solution---------------WHY??? On Thu, 22 Jan 2009 15:04:11 -0600 "Terry Schieler" <[hidden email]> wrote: > > Wayne Burdick wrote: > > Humans use lexicographical and semantic clues to fill in dropped CW > characters, and computers can do the same. But this goes way beyond the > simple signal processing used in, say, the K3's present CW decoder or > the one used in HRD. (I studied natural language recognition in college > and was anxious to play with either neural networks or traditional AI > methods as the foundation for CW decoding, but my other classes got in > the way :) > > One idea from the early days of AI is the so-called "blackboard" model. > Imagine a garbled sentence on a blackboard, with various experts > offering their opinions about what each letter and word is based on > their specialized knowledge of word morphology, letter frequency, > syntax, semantics, etc. You weigh these opinions based on degree of > confidence, and once there's enough evidence for a letter or word, you > fill it in, which in turn offers additional information to the > highest-level expert, who might be considering the actual meaning of a > phrase. His predictions can then strengthen the evidence for lower > level symbols, and so on. Such methods are very algorithm-intensive, > but might be useful for some aspects of CW stream parsing. > > A neural network could handle this, too, and has the advantage of > self-organization. This is how I'd approach it (assuming unlimited free > time--not!). You could use any of several different types of networks > that have been proven successful at NLP (natural language processing).
> >For example, you might take the incoming CW, break it into samples (say > a few samples per bit at the highest code speed to be processed), shift > the serial data representing 5 to 20 letters into a serial-to-parallel > shift register, then feed the parallel data to the network's inputs. Or > you could use a network with internal feedback (memory), with just one > input, which itself could be "fuzzy" (the analog voltage from an > envelope detector) or digital (0 or 1 depending on the output of a > comparator, looking at the CW stream). The output might be a parallel > binary word, perhaps ASCII, or a single output with multiple levels, > where the voltage itself represents a symbol. > > To make this work, you need at least three things: an input > representation that provides adequate context (e.g., if you want to > decode a letter, the input should contain at least a few letters on > either side of the target); a sufficiently complex network; and a large > corpus of clean text with which to train the network (probably > thousands of words, drawn from actual on-air content). > > One classic method of training the network involves placing known-good > signals at the input, then comparing the desired outputs to the actual > outputs, and "back-propagating" the resulting error through the > network--from outputs to hidden layers to inputs--so that the network's > nodes gradually acquire the proper "weights." Once the network has been > trained to the point that it perfectly copies clean CW, you can then > present it with a noisy signal stream. A well-designed network would be > able to correct dropped CW elements or even letters if its internal > representation is highly evolved. The network will have learned > language-specific rules, and you don't have to know how it works, > any more than you know how your own brain does it. > > The actual implementation is left as an exercise for the reader.
If you > come up with an algorithm written in 'C', let me know and I'll try to > port it to the K3's PIC. > > Wayne > N6KR > > > Sounds good, Wayne. When can you have it done? Upper right hand button > would be my choice. > > 73 de Terry, W0FM ******** Why would anybody want to use a "CW decoder" in the first place??? I guess that we who can "mentally" copy CW in our BRAINS have an advantage over those who really didn't APPLY THEMSELVES to accomplish what we did (thousands WORLDWIDE!!!) I'm not an elitist nor are our other "Brethren" who can copy "Intl. Morse", we just appreciate its VALUE, not only in past years, but current as well!!! Regards, Jim/nn6ee |
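Wayne's training recipe (present known-good signals at the input, compare desired and actual outputs, back-propagate the error from the output through the hidden layer to the inputs) can be shown in miniature. This is a deliberately tiny sketch, not the network he describes: one hidden layer, two training patterns ("E" vs. "T" keying windows), a cross-entropy output gradient, and no bias terms, all chosen for brevity:

```python
import math
import random

random.seed(42)

# Training pairs: 4-sample keying windows -> class (0.0 = "E", 1.0 = "T").
# "E" is a single dit (short burst); "T" is a single dah (long burst).
DATA = [([1, 0, 0, 0], 0.0), ([1, 1, 1, 0], 1.0)]

HIDDEN = 3  # hidden-layer size

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Randomly initialized weights: input->hidden (w1) and hidden->output (w2).
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def forward(x):
    """Return (hidden activations, network output) for one window."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    out = sigmoid(sum(w * hi for w, hi in zip(w2, h)))
    return h, out

def train(epochs=2000, lr=1.0):
    for _ in range(epochs):
        for x, target in DATA:
            h, out = forward(x)
            d_out = out - target  # cross-entropy gradient at the output
            for j in range(HIDDEN):
                # Back-propagate the output error through hidden unit j,
                # then nudge both layers of weights downhill.
                d_h = d_out * w2[j] * h[j] * (1.0 - h[j])
                w2[j] -= lr * d_out * h[j]
                for i in range(4):
                    w1[j][i] -= lr * d_h * x[i]

train()
```

After training, `forward([1, 0, 0, 0])[1]` sits near 0 and `forward([1, 1, 1, 0])[1]` near 1. A decoder along Wayne's lines would replace the two toy patterns with thousands of windows of real on-air CW and a multi-symbol output layer.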
|
In reply to this post by wayne burdick
I've used the CW decoder ring function on my K3 a few times, but that's not why I bought the rig. Good code, with a likewise good S/N ratio, yields good machine copy. So does my brain, to some extent. The spoilers are ops who send CW in one continuous unbroken stream, and high noise levels relative to the CW signal. The former apparently like sending a stream via a key, or more likely a keyboard, and hopefully there's a very experienced op or PC program at the other end to make sense of it all. Some folks talk that way as well. The challenge in a high noise environment is setting the signal threshold just above the noise to prevent the generation of random extraterrestrial code = E's and T's. Yes, I use and peak the K3's noise blankers, and sometimes the NR with a wide filter, but it's those brief noise pops that bleed through that ruin the CW soup. Improving that aspect would, I believe, improve the machine copy. 73 Gary NL7Y |
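Gary's threshold problem (keep the decision level just above the noise so pops don't become stray E's and T's) is commonly handled by estimating the noise floor from the envelope itself and adding hysteresis, so a sample has to cross a higher bar to turn the detector on than to keep it on. A sketch, with every parameter value an assumption:

```python
def detect_keying(envelope, margin=2.0, hysteresis=0.8):
    """Classify envelope samples as key-down (True) / key-up (False).

    The key-down threshold sits `margin` times the estimated noise
    floor; once keyed, the key-up threshold is lowered by `hysteresis`
    so brief dips or pops straddling one level don't toggle the output."""
    # Lower-quartile sample as a crude noise-floor estimate: CW duty
    # cycle is well under 75%, so this sample is almost surely noise.
    noise_floor = sorted(envelope)[len(envelope) // 4]
    on_thresh = max(noise_floor * margin, 1e-9)
    off_thresh = on_thresh * hysteresis
    key_down = False
    out = []
    for x in envelope:
        key_down = x > (off_thresh if key_down else on_thresh)
        out.append(key_down)
    return out
```

On `[0.1, 0.9, 0.2, 0.9, 0.1]` the mid-stream dip to 0.2 stays key-down thanks to the hysteresis band, where a single fixed threshold at the same level would have chopped the element in two. A real detector would track the noise floor continuously rather than from one block.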
