CHAPTER 12
The Fast Fourier Transform

There are several ways to calculate the Discrete Fourier Transform (DFT), such as solving simultaneous linear equations or the correlation method described in Chapter 8. The Fast Fourier Transform (FFT) is another method for calculating the DFT. While it produces the same result as the other approaches, it is incredibly more efficient, often reducing the computation time by hundreds of times. This is the same improvement as flying in a jet aircraft versus walking! If the FFT were not available, many of the techniques described in this book would not be practical. While the FFT only requires a few dozen lines of code, it is one of the most complicated algorithms in DSP. But don't despair! You can easily use published FFT routines without fully understanding the internal workings.

Real DFT Using the Complex DFT

J.W. Cooley and J.W. Tukey are given credit for bringing the FFT to the world in their paper: "An algorithm for the machine calculation of complex Fourier series," Mathematics of Computation, Vol. 19, 1965, pp. 297-301. In retrospect, others had discovered the technique many years before. For instance, the great German mathematician Karl Friedrich Gauss (1777-1855) had used the method more than a century earlier. This early work was largely forgotten because it lacked the tool to make it practical: the digital computer. Cooley and Tukey are honored because they discovered the FFT at the right time, the beginning of the computer revolution.

The FFT is based on the complex DFT, a more sophisticated version of the real DFT discussed in the last four chapters. These transforms are named for the way each represents data, that is, using complex numbers or using real numbers. The term complex does not mean that this representation is difficult or complicated, but that a specific type of mathematics is used. Complex mathematics often is difficult and complicated, but that isn't where the name comes from. Chapter 29 discusses the complex DFT and provides the background needed to understand the details of the FFT algorithm. The topic of this chapter is simpler: how to use the FFT to calculate the real DFT, without drowning in a mire of advanced mathematics.

FIGURE 12-1 Comparing the real and complex DFTs. The real DFT takes an N point time domain signal and creates two N/2 + 1 point frequency domain signals. The complex DFT takes two N point time domain signals and creates two N point frequency domain signals. The crosshatched regions show the values common to the two transforms.

Since the FFT is an algorithm for calculating the complex DFT, it is important to understand how to transfer real DFT data into and out of the complex DFT format. Figure 12-1 compares how the real DFT and the complex DFT store data. The real DFT transforms an N point time domain signal into two N/2 + 1 point frequency domain signals. The time domain signal is called just that: the time domain signal. The two signals in the frequency domain are called the real part and the imaginary part, holding the amplitudes of the cosine waves and sine waves, respectively. This should be very familiar from past chapters.
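These storage schemes are easy to see in a modern language. The following minimal Python sketch (an illustration using NumPy's rfft and fft, previewing the complex DFT storage described next) shows the array lengths involved, and that samples 0 through N/2 of the complex DFT match the real DFT:

import numpy as np

N = 64
x = np.random.randn(N)                    # an N point real time domain signal

X_real = np.fft.rfft(x)                   # real DFT: N/2 + 1 frequency samples
X_complex = np.fft.fft(x)                 # complex DFT: N frequency samples

print(len(X_real), len(X_complex))        # 33 64
assert np.allclose(X_complex[:N//2 + 1], X_real)   # the shared (crosshatched) values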
In comparison, the complex DFT transforms two N point time domain signals into two N point frequency domain signals. The two time domain signals are called the real part and the imaginary part, just as are the frequency domain signals. In spite of their names, all of the values in these arrays are just ordinary numbers. (If you are familiar with complex numbers: the j's are not included in the array values; they are a part of the mathematics. Recall that the operator, Im( ), returns a real number.)

TABLE 12-1

6000 'NEGATIVE FREQUENCY GENERATION
6010 'This subroutine creates the complex frequency domain from the real frequency domain.
6020 'Upon entry to this subroutine, N% contains the number of points in the signals, and
6030 'REX[ ] and IMX[ ] contain the real frequency domain in samples 0 to N%/2.
6040 'On return, REX[ ] and IMX[ ] contain the complex frequency domain in samples 0 to N%-1.
6050 '
6060 FOR K% = (N%/2+1) TO (N%-1)
6070 REX[K%] = REX[N%-K%]
6080 IMX[K%] = -IMX[N%-K%]
6090 NEXT K%
6100 '
6110 RETURN

Suppose you have an N point signal, and need to calculate the real DFT by means of the complex DFT (such as by using the FFT algorithm). First, move the N point signal into the real part of the complex DFT's time domain, and then set all of the samples in the imaginary part to zero. Calculation of the complex DFT results in a real and an imaginary signal in the frequency domain, each composed of N points. Samples 0 through N/2 of these signals correspond to the real DFT's spectrum.

As discussed in Chapter 10, the DFT's frequency domain is periodic when the negative frequencies are included (see Fig. 10-9). The choice of a single period is arbitrary; it can be chosen between -1.0 and 0, -0.5 and 0.5, 0 and 1.0, or any other one unit interval referenced to the sampling rate. The complex DFT's frequency spectrum includes the negative frequencies in the 0 to 1.0 arrangement. In other words, one full period stretches from sample 0 to sample N-1, corresponding with 0 to 1.0 times the sampling rate. The positive frequencies sit between sample 0 and N/2, corresponding with 0 to 0.5. The other samples, between N/2 + 1 and N-1, contain the negative frequency values (which are usually ignored).

Calculating a real Inverse DFT using a complex Inverse DFT is slightly harder. This is because you need to insure that the negative frequencies are loaded in the proper format. Remember, points 0 through N/2 in the complex DFT are the same as in the real DFT, for both the real and the imaginary parts. For the real part, point N/2 + 1 is the same as point N/2 - 1, point N/2 + 2 is the same as point N/2 - 2, etc. This continues to point N-1 being the same as point 1. The same basic pattern is used for the imaginary part, except the sign is changed. That is, point N/2 + 1 is the negative of point N/2 - 1, point N/2 + 2 is the negative of point N/2 - 2, etc. Notice that samples 0 and N/2 do not have a matching point in this duplication scheme. Use Fig. 10-9 as a guide to understanding this symmetry. In practice, you load the real DFT's frequency spectrum into samples 0 to N/2 of the complex DFT's arrays, and then use a subroutine to generate the negative frequencies between samples N/2 + 1 and N-1. Table 12-1 shows such a program. To check that the proper symmetry is present, after taking the inverse FFT, look at the imaginary part of the time domain. It will contain all zeros if everything is correct (except for a few parts-per-million of noise, using single precision calculations).
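In Python, Table 12-1's loop and the imaginary-part check can be sketched as follows (a minimal illustration assuming NumPy; the function name is chosen here for clarity, it is not from the book):

import numpy as np

def negative_frequency_generation(rex, imx, n):
    # Table 12-1 in Python: fill samples n/2+1 to n-1 from samples 0 to n/2
    for k in range(n // 2 + 1, n):
        rex[k] = rex[n - k]
        imx[k] = -imx[n - k]

n = 16
x = np.random.randn(n)                          # real N point signal
spectrum = np.fft.fft(x)                        # complex DFT (imaginary input = 0)

rex = np.zeros(n)
imx = np.zeros(n)
rex[:n // 2 + 1] = spectrum.real[:n // 2 + 1]   # load only the real DFT's samples
imx[:n // 2 + 1] = spectrum.imag[:n // 2 + 1]
negative_frequency_generation(rex, imx, n)      # regenerate the negative frequencies

time_domain = np.fft.ifft(rex + 1j * imx)
assert np.allclose(time_domain.imag, 0)         # all zeros: the symmetry was correct
assert np.allclose(time_domain.real, x)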
FIGURE 12-2 The FFT decomposition. An N point signal is decomposed into N signals each containing a single point. Each stage uses an interlace decomposition, separating the even and odd numbered samples: 1 signal of 16 points, 2 signals of 8 points, 4 signals of 4 points, 8 signals of 2 points, 16 signals of 1 point.

How the FFT works

The FFT is a complicated algorithm, and its details are usually left to those that specialize in such things. This section describes the general operation of the FFT, but skirts a key issue: the use of complex numbers. If you have a background in complex mathematics, you can read between the lines to understand the true nature of the algorithm. Don't worry if the details elude you; few scientists and engineers that use the FFT could write the program from scratch.

In complex notation, the time and frequency domains each contain one signal made up of N complex points. Each of these complex points is composed of two numbers, the real part and the imaginary part. For example, when we talk about complex sample X[42], it refers to the combination of ReX[42] and ImX[42]. In other words, each complex variable holds two numbers. When two complex variables are multiplied, the four individual components must be combined to form the two components of the product (such as in Eq. 9-1). The following discussion on "How the FFT works" uses this jargon of complex notation. That is, the singular terms: signal, point, sample, and value, refer to the combination of the real part and the imaginary part.

The FFT operates by decomposing an N point time domain signal into N time domain signals each composed of a single point. The second step is to calculate the N frequency spectra corresponding to these N time domain signals. Lastly, the N spectra are synthesized into a single frequency spectrum.

Figure 12-2 shows an example of the time domain decomposition used in the FFT. In this example, a 16 point signal is decomposed through four separate stages. The first stage breaks the 16 point signal into two signals each consisting of 8 points. The second stage decomposes the data into four signals of 4 points. This pattern continues until there are N signals composed of a single point. An interlaced decomposition is used each time a signal is broken in two, that is, the signal is separated into its even and odd numbered samples. The best way to understand this is by inspecting Fig. 12-2 until you grasp the pattern.
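One stage of this interlace decomposition amounts to two slices. A minimal Python sketch (for illustration only), applied twice to the 16 point signal of Fig. 12-2:

def interlace(x):
    # Separate a signal into its even and odd numbered samples
    return x[0::2], x[1::2]

signal = list(range(16))
even, odd = interlace(signal)
print(even, odd)           # [0, 2, 4, ..., 14] and [1, 3, 5, ..., 15]
print(interlace(even))     # ([0, 4, 8, 12], [2, 6, 10, 14])
print(interlace(odd))      # ([1, 5, 9, 13], [3, 7, 11, 15])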
There are log2N stages required in this decomposition, i.e., a 16 point signal (2^4) requires 4 stages, a 512 point signal (2^9) requires 9 stages, a 4096 point signal (2^12) requires 12 stages, etc. Remember this value, log2N; it will be referenced many times in this chapter.

Now that you understand the structure of the decomposition, it can be greatly simplified. The decomposition is nothing more than a reordering of the samples in the signal. Figure 12-3 shows the rearrangement pattern required. On the left, the sample numbers of the original signal are listed along with their binary equivalents. On the right, the rearranged sample numbers are listed, also along with their binary equivalents. The important idea is that the binary numbers are the reversals of each other. For example, sample 3 (0011) is exchanged with sample number 12 (1100). Likewise, sample number 14 (1110) is swapped with sample number 7 (0111), and so forth. The FFT time domain decomposition is usually carried out by a bit reversal sorting algorithm. This involves rearranging the order of the N time domain samples by counting in binary with the bits flipped left-for-right (such as in the far right column in Fig. 12-3).

FIGURE 12-3 The FFT bit reversal sorting. The FFT time domain decomposition can be implemented by sorting the samples according to bit reversed order.

 Sample numbers           Sample numbers
 in normal order          after bit reversal
 Decimal   Binary         Decimal   Binary
    0       0000             0       0000
    1       0001             8       1000
    2       0010             4       0100
    3       0011            12       1100
    4       0100             2       0010
    5       0101            10       1010
    6       0110             6       0110
    7       0111            14       1110
    8       1000             1       0001
    9       1001             9       1001
   10       1010             5       0101
   11       1011            13       1101
   12       1100             3       0011
   13       1101            11       1011
   14       1110             7       0111
   15       1111            15       1111
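The bit reversed ordering in Fig. 12-3 can be generated in a few lines. A minimal Python sketch (for illustration; bit_reversal_sort is a name chosen here, not a standard routine):

def bit_reversal_sort(x):
    # Reorder x (length N = 2**m) by bit reversed sample number, as in Fig. 12-3
    n = len(x)
    m = n.bit_length() - 1                        # number of address bits, log2(N)
    out = [0] * n
    for i in range(n):
        j = int(format(i, '0%db' % m)[::-1], 2)   # reverse the m bit address
        out[j] = x[i]
    return out

print(bit_reversal_sort(list(range(16))))
# [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]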
FIGURE 12-4 The FFT synthesis. When a time domain signal is diluted with zeros, the frequency domain is duplicated. If the time domain signal is also shifted by one sample during the dilution, the spectrum will additionally be multiplied by a sinusoid.

The next step in the FFT algorithm is to find the frequency spectra of the 1 point time domain signals. Nothing could be easier; the frequency spectrum of a 1 point signal is equal to itself. This means that nothing is required to do this step. Although there is no work involved, don't forget that each of the 1 point signals is now a frequency spectrum, and not a time domain signal.

The last step in the FFT is to combine the N frequency spectra in the exact reverse order that the time domain decomposition took place. This is where the algorithm gets messy. Unfortunately, the bit reversal shortcut is not applicable, and we must go back one stage at a time. In the first stage, 16 frequency spectra (1 point each) are synthesized into 8 frequency spectra (2 points each). In the second stage, the 8 frequency spectra (2 points each) are synthesized into 4 frequency spectra (4 points each), and so on. The last stage results in the output of the FFT, a 16 point frequency spectrum.

Figure 12-4 shows how two frequency spectra, each composed of 4 points, are combined into a single frequency spectrum of 8 points. This synthesis must undo the interlaced decomposition done in the time domain. In other words, the frequency domain operation must correspond to the time domain procedure of combining two 4 point signals by interlacing. Consider two time domain signals, abcd and efgh. An 8 point time domain signal can be formed by two steps: dilute each 4 point signal with zeros to make it an 8 point signal, and then add the signals together. That is, abcd becomes a0b0c0d0, and efgh becomes 0e0f0g0h. Adding these two 8 point signals produces aebfcgdh.

As shown in Fig. 12-4, diluting the time domain with zeros corresponds to a duplication of the frequency spectrum. Therefore, the frequency spectra are combined in the FFT by duplicating them, and then adding the duplicated spectra together. In order to match up when added, the two time domain signals are diluted with zeros in a slightly different way. In one signal, the odd points are zero, while in the other signal, the even points are zero. In other words, one of the time domain signals (0e0f0g0h in Fig. 12-4) is shifted to the right by one sample. This time domain shift corresponds to multiplying the spectrum by a sinusoid. To see this, recall that a shift in the time domain is equivalent to convolving the signal with a shifted delta function. This multiplies the signal's spectrum with the spectrum of the shifted delta function. The spectrum of a shifted delta function is a sinusoid (see Fig. 11-2).

FIGURE 12-5 FFT synthesis flow diagram. This shows the method of combining the even and odd 4 point frequency spectra into a single 8 point frequency spectrum. The ×S operation means that the signal is multiplied by a sinusoid with an appropriately selected frequency.

FIGURE 12-6 The FFT butterfly. This is the basic calculation element in the FFT, taking two complex points and converting them into two other complex points.

Figure 12-5 shows a flow diagram for combining two 4 point spectra into a single 8 point spectrum. To reduce the situation even more, notice that Fig. 12-5 is formed from the basic pattern in Fig. 12-6 repeated over and over. This simple flow diagram is called a butterfly due to its winged appearance. The butterfly is the basic computational element of the FFT, transforming two complex points into two other complex points.

FIGURE 12-7 Flow diagram of the FFT. This is based on three steps: (1) decompose an N point time domain signal into N signals each containing a single point, (2) find the spectrum of each of the N point signals (nothing required), and (3) synthesize the N frequency spectra into a single frequency spectrum. The synthesis consists of three nested loops: for each of the log2N stages, for each sub-DFT, and for each butterfly.

Figure 12-7 shows the structure of the entire FFT. The time domain decomposition is accomplished with a bit reversal sorting algorithm. Transforming the decomposed data into the frequency domain involves nothing and therefore does not appear in the figure. The frequency domain synthesis requires three loops. The outer loop runs through the log2N stages (i.e., each level in Fig. 12-2, starting from the bottom and moving to the top). The middle loop moves through each of the individual frequency spectra in the stage being worked on (i.e., each of the boxes on any one level in Fig. 12-2). The innermost loop uses the butterfly to calculate the points in each frequency spectrum (i.e., looping through the samples inside any one box in Fig. 12-2). The overhead boxes in Fig. 12-7 determine the beginning and ending indexes for the loops, as well as calculating the sinusoids needed in the butterflies.
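The duplicate, multiply, and add recipe of Figs. 12-4 and 12-5 can be confirmed numerically. In this minimal Python/NumPy sketch (an illustration, not a listing from the book), E and O are the 4 point spectra of the even and odd samples; each output pair X[k] and X[k+4] is one butterfly of Fig. 12-6:

import numpy as np

x = np.random.randn(8) + 1j * np.random.randn(8)   # 8 point complex signal
E = np.fft.fft(x[0::2])                            # spectrum of even samples
O = np.fft.fft(x[1::2])                            # spectrum of odd samples

k = np.arange(8)
sinusoid = np.exp(-2j * np.pi * k / 8)             # the ×S multiplier in Fig. 12-5
X = np.tile(E, 2) + sinusoid * np.tile(O, 2)       # duplicate, multiply, add
assert np.allclose(X, np.fft.fft(x))               # matches the 8 point FFT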
Now we come to the heart of this chapter, the actual FFT programs.

FFT Programs

As discussed in Chapter 8, the real DFT can be calculated by correlating the time domain signal with sine and cosine waves (see Table 8-2). Table 12-2 shows a program to calculate the complex DFT by the same method. In an apples-to-apples comparison, this is the program that the FFT improves upon.

TABLE 12-2

5000 'COMPLEX DFT BY CORRELATION
5010 'Upon entry, N% contains the number of points in the DFT, and
5020 'XR[ ] and XI[ ] contain the real and imaginary parts of the time domain.
5030 'Upon return, REX[ ] and IMX[ ] contain the frequency domain data.
5040 'All signals run from 0 to N%-1.
5050 '
5060 PI = 3.14159265 'Set constants
5070 '
5080 FOR K% = 0 TO N%-1 'Zero REX[ ] and IMX[ ], so they can be used
5090 REX[K%] = 0 'as accumulators during the correlation
5100 IMX[K%] = 0
5110 NEXT K%
5120 '
5130 FOR K% = 0 TO N%-1 'Loop for each value in frequency domain
5140 FOR I% = 0 TO N%-1 'Correlate with the complex sinusoid, SR & SI
5150 '
5160 SR = COS(2*PI*K%*I%/N%) 'Calculate complex sinusoid
5170 SI = -SIN(2*PI*K%*I%/N%)
5180 REX[K%] = REX[K%] + XR[I%]*SR - XI[I%]*SI
5190 IMX[K%] = IMX[K%] + XR[I%]*SI + XI[I%]*SR
5200 '
5210 NEXT I%
5220 NEXT K%
5230 '
5240 RETURN
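For comparison in a modern language, here is Table 12-2 transliterated into Python (a minimal sketch; it keeps the O(N^2) double loop and the SR, SI sinusoid names):

import math

def complex_dft_by_correlation(xr, xi):
    # Table 12-2 in Python: correlate the input with complex sinusoids.
    # xr, xi hold the real and imaginary time domain; returns (rex, imx).
    n = len(xr)
    rex = [0.0] * n
    imx = [0.0] * n
    for k in range(n):                        # loop for each frequency value
        for i in range(n):                    # correlate with the sinusoid SR, SI
            sr = math.cos(2 * math.pi * k * i / n)
            si = -math.sin(2 * math.pi * k * i / n)
            rex[k] += xr[i] * sr - xi[i] * si
            imx[k] += xr[i] * si + xi[i] * sr
    return rex, imx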
Tables 12-3 and 12-4 show two different FFT programs, one in FORTRAN and one in BASIC. First we will look at the BASIC routine in Table 12-4. This subroutine produces exactly the same output as the correlation technique in Table 12-2, except it does it much faster. The block diagram in Fig. 12-7 can be used to identify the different sections of this program. Data are passed to this FFT subroutine in the arrays REX[ ] and IMX[ ], each running from sample 0 to N-1. Upon return from the subroutine, REX[ ] and IMX[ ] are overwritten with the frequency domain data. This is another way that the FFT is highly optimized; the same arrays are used for the input, intermediate storage, and output. This efficient use of memory is important for designing fast hardware to calculate the FFT. The term in-place computation is used to describe this memory usage.

While all FFT programs produce the same numerical result, there are subtle variations in programming that you need to look out for. Several of these differences are illustrated by the FORTRAN program listed in Table 12-3. This program uses an algorithm called decimation in frequency, while the previously described algorithm is called decimation in time. In a decimation in frequency algorithm, the bit reversal sorting is done after the three nested loops. There are also FFT routines that completely eliminate the bit reversal sorting. None of these variations significantly improve the performance of the FFT, and you shouldn't worry about which one you are using.

TABLE 12-3 The Fast Fourier Transform in FORTRAN. Data are passed to this subroutine in the variables X( ) and M. The integer, M, is the base two logarithm of the length of the DFT, i.e., M = 8 for a 256 point DFT, M = 12 for a 4096 point DFT, etc. The complex array, X( ), holds the time domain data upon entering the DFT. Upon return from this subroutine, X( ) is overwritten with the frequency domain data. Take note: this subroutine requires that the input and output signals run from X(1) through X(N), rather than the customary X(0) through X(N-1).

      SUBROUTINE FFT(X,M)
      COMPLEX X(4096),U,S,T
      PI=3.14159265
      N=2**M
      DO 20 L=1,M
      LE=2**(M+1-L)
      LE2=LE/2
      U=(1.0,0.0)
      S=CMPLX(COS(PI/FLOAT(LE2)),-SIN(PI/FLOAT(LE2)))
      DO 20 J=1,LE2
      DO 10 I=J,N,LE
      IP=I+LE2
      T=X(I)+X(IP)
      X(IP)=(X(I)-X(IP))*U
10    X(I)=T
20    U=U*S
      ND2=N/2
      NM1=N-1
      J=1
      DO 50 I=1,NM1
      IF(I.GE.J) GO TO 30
      T=X(J)
      X(J)=X(I)
      X(I)=T
30    K=ND2
40    IF(K.GE.J) GO TO 50
      J=J-K
      K=K/2
      GO TO 40
50    J=J+K
      RETURN
      END

The important differences between FFT algorithms concern how data are passed to and from the subroutines. In the BASIC program, data enter and leave the subroutine in the arrays REX[ ] and IMX[ ], with the samples running from index 0 to N-1. In the FORTRAN program, data are passed in the complex array X( ), with the samples running from 1 to N. Since this is an array of complex variables, each sample in X( ) consists of two numbers, a real part and an imaginary part. The length of the DFT must also be passed to these subroutines. In the BASIC program, the variable N% is used for this purpose. In comparison, the FORTRAN program uses the variable M, which is defined to equal log2N. For instance, M will be 8 for a 256 point DFT, 12 for a 4096 point DFT, etc.

TABLE 12-4 The Fast Fourier Transform in BASIC.

1000 'THE FAST FOURIER TRANSFORM
1010 'Upon entry, N% contains the number of points in the DFT, REX[ ] and
1020 'IMX[ ] contain the real and imaginary parts of the input. Upon return,
1030 'REX[ ] and IMX[ ] contain the DFT output. All signals run from 0 to N%-1.
1040 '
1050 PI = 3.14159265 'Set constants
1060 NM1% = N%-1
1070 ND2% = N%/2
1080 M% = CINT(LOG(N%)/LOG(2))
1090 J% = ND2%
1100 '
1110 FOR I% = 1 TO N%-2 'Bit reversal sorting
1120 IF I% >= J% THEN GOTO 1190
1130 TR = REX[J%]
1140 TI = IMX[J%]
1150 REX[J%] = REX[I%]
1160 IMX[J%] = IMX[I%]
1170 REX[I%] = TR
1180 IMX[I%] = TI
1190 K% = ND2%
1200 IF K% > J% THEN GOTO 1240
1210 J% = J%-K%
1220 K% = K%/2
1230 GOTO 1200
1240 J% = J%+K%
1250 NEXT I%
1260 '
1270 FOR L% = 1 TO M% 'Loop for each stage
1280 LE% = CINT(2^L%)
1290 LE2% = LE%/2
1300 UR = 1
1310 UI = 0
1320 SR = COS(PI/LE2%) 'Calculate sine & cosine values
1330 SI = -SIN(PI/LE2%)
1340 FOR J% = 1 TO LE2% 'Loop for each sub DFT
1350 JM1% = J%-1
1360 FOR I% = JM1% TO NM1% STEP LE% 'Loop for each butterfly
1370 IP% = I%+LE2%
1380 TR = REX[IP%]*UR - IMX[IP%]*UI 'Butterfly calculation
1390 TI = REX[IP%]*UI + IMX[IP%]*UR
1400 REX[IP%] = REX[I%]-TR
1410 IMX[IP%] = IMX[I%]-TI
1420 REX[I%] = REX[I%]+TR
1430 IMX[I%] = IMX[I%]+TI
1440 NEXT I%
1450 TR = UR
1460 UR = TR*SR - UI*SI
1470 UI = TR*SI + UI*SR
1480 NEXT J%
1490 NEXT L%
1500 '
1510 RETURN

TABLE 12-5

2000 'INVERSE FAST FOURIER TRANSFORM SUBROUTINE
2010 'Upon entry, N% contains the number of points in the IDFT, REX[ ] and
2020 'IMX[ ] contain the real and imaginary parts of the complex frequency domain.
2030 'Upon return, REX[ ] and IMX[ ] contain the complex time domain.
2040 'All signals run from 0 to N%-1.
2050 '
2060 FOR K% = 0 TO N%-1 'Change the sign of IMX[ ]
2070 IMX[K%] = -IMX[K%]
2080 NEXT K%
2090 '
2100 GOSUB 1000 'Calculate forward FFT (Table 12-4)
2110 '
2120 FOR I% = 0 TO N%-1 'Divide the time domain by N% and
2130 REX[I%] = REX[I%]/N% 'change the sign of IMX[ ]
2140 IMX[I%] = -IMX[I%]/N%
2150 NEXT I%
2160 '
2170 RETURN

The point is, the programmer who writes an FFT subroutine has many options for interfacing with the host program. Arrays that run from 1 to N, such as in the FORTRAN program, are especially aggravating. Most of the DSP literature (including this book) explains algorithms assuming the arrays run from sample 0 to N-1. For instance, if the arrays run from 1 to N, the symmetry in the frequency domain is around points 1 and N/2 + 1, rather than points 0 and N/2.
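As a reading aid for Table 12-4, the same decimation in time algorithm can be transliterated into Python (a minimal sketch, not a listing from the book; complex arithmetic stands in for the separate UR, UI bookkeeping, and the arrays run from 0 to N-1 as in the BASIC):

import cmath

def fft(rex, imx):
    # In-place FFT of the lists rex, imx (length N = 2**m), as in Table 12-4
    n = len(rex)
    m = n.bit_length() - 1
    j = n // 2                                # bit reversal sorting (lines 1110-1250)
    for i in range(1, n - 1):
        if i < j:
            rex[i], rex[j] = rex[j], rex[i]
            imx[i], imx[j] = imx[j], imx[i]
        k = n // 2
        while k <= j:
            j -= k
            k //= 2
        j += k
    for l in range(1, m + 1):                 # loop for each stage
        le = 2 ** l
        le2 = le // 2
        u = 1 + 0j
        s = cmath.exp(-1j * cmath.pi / le2)   # sine & cosine values (SR, SI)
        for jj in range(le2):                 # loop for each sub DFT
            for i in range(jj, n, le):        # loop for each butterfly
                ip = i + le2
                t = complex(rex[ip], imx[ip]) * u   # butterfly calculation
                rex[ip] = rex[i] - t.real
                imx[ip] = imx[i] - t.imag
                rex[i] += t.real
                imx[i] += t.imag
            u *= s

rex = [1.0, 2.0, 3.0, 4.0]
imx = [0.0, 0.0, 0.0, 0.0]
fft(rex, imx)
print(rex, imx)   # [10.0, -2.0, -2.0, -2.0] [0.0, 2.0, 0.0, -2.0]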
Most of the DSP literature (including this book) explains algorithms assuming the arrays run from sample 0 to N-1. For instance, if the arrays run from 1 to N, the symmetry in the frequency domain is around points 1 and N/2+1, rather than points 0 and N/2.

Using the complex DFT to calculate the real DFT has another interesting advantage. The complex DFT is more symmetrical between the time and frequency domains than the real DFT. That is, the duality is stronger. Among other things, this means that the Inverse DFT is nearly identical to the Forward DFT. In fact, the easiest way to calculate an Inverse FFT is to calculate a Forward FFT, and then adjust the data. Table 12-5 shows a subroutine for calculating the Inverse FFT in this manner.

Suppose you copy one of these FFT algorithms into your computer program and start it running. How do you know if it is operating properly? Two tricks are commonly used for debugging. First, start with some arbitrary time domain signal, such as from a random number generator, and run it through the FFT. Next, run the resultant frequency spectrum through the Inverse FFT and compare the result with the original signal. They should be identical, except for round-off noise (a few parts-per-million for single precision).

The second test of proper operation is that the signals have the correct symmetry. When the imaginary part of the time domain signal is composed of all zeros (the normal case), the frequency domain of the complex DFT will be symmetrical around samples 0 and N/2, as previously described. Likewise, when this correct symmetry is present in the frequency domain, the Inverse DFT will produce a time domain that has an imaginary part composed of all zeros (plus round-off noise). These debugging techniques are essential for using the FFT; become familiar with them.

Speed and Precision Comparisons

When the DFT is calculated by correlation (as in Table 12-2), the program uses two nested loops, each running through N points. This means that the total number of operations is proportional to N times N. The time to complete the program is thus given by:

EQUATION 12-1
DFT execution time. The time required to calculate a DFT by correlation is proportional to the length of the DFT squared.

    \text{Execution Time} = k_{DFT} \, N^2

where N is the number of points in the DFT and k_DFT is a constant of proportionality. If the sine and cosine values are calculated within the nested loops, k_DFT is equal to about 25 microseconds on a Pentium at 100 MHz. If you precalculate the sine and cosine values and store them in a look-up table, k_DFT drops to about 7 microseconds. For example, a 1024 point DFT will require about 25 seconds, or nearly 25 milliseconds per point. That's slow!

Using this same strategy we can derive the execution time for the FFT. The time required for the bit reversal is negligible. In each of the log2(N) stages there are N/2 butterfly computations. This means the execution time for the program is approximated by:

EQUATION 12-2
FFT execution time. The time required to calculate a DFT using the FFT is proportional to N multiplied by the logarithm of N.

    \text{Execution Time} = k_{FFT} \, N \log_2 N

The value of k_FFT is about 10 microseconds on a 100 MHz Pentium system. A 1024 point FFT requires about 70 milliseconds to execute, or 70 microseconds per point. This is more than 300 times faster than the DFT calculated by correlation! Not only is N log2(N) less than N^2, it increases much more slowly as N becomes larger.
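The two estimates are easy to tabulate. The sketch below simply evaluates Eqs. 12-1 and 12-2 with the book's approximate constants for a 100 MHz Pentium; the exact numbers depend entirely on the hardware, so treat the output as rough guidance only:

    import math

    k_dft, k_fft = 25e-6, 10e-6        # approximate constants, in seconds
    for n in [16, 64, 256, 1024, 4096]:
        t_dft = k_dft * n**2                     # Eq. 12-1
        t_fft = k_fft * n * math.log2(n)         # Eq. 12-2
        print(f"N={n:5d}  DFT={t_dft:9.4f}s  FFT={t_fft:8.5f}s  "
              f"ratio={t_dft / t_fft:6.0f}")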
For example, a 32 point FFT is about ten times faster than the correlation method. However, a 4096 point FFT is one-thousand times faster. For small values of N (say, 32 to 128), the FFT is important. For large values of N (1024 and above), the FFT is absolutely critical. Figure 12-8 compares the execution times of the two algorithms in a graphical form.

FIGURE 12-8
Execution times for calculating the DFT. The correlation method refers to the algorithm described in Table 12-2. This method can be made faster by precalculating the sine and cosine values and storing them in a look-up table (LUT). The FFT (Table 12-3) is the fastest algorithm when the DFT is greater than 16 points long. The times shown are for a Pentium processor at 100 MHz.

The FFT has another advantage besides raw speed. The FFT is calculated more precisely because the fewer number of calculations results in less round-off error. This can be demonstrated by taking the FFT of an arbitrary signal, and then running the frequency spectrum through an Inverse FFT. This reconstructs the original time domain signal, except for the addition of round-off noise from the calculations. A single number characterizing this noise can be obtained by calculating the standard deviation of the difference between the two signals. For comparison, this same procedure can be repeated using a DFT calculated by correlation, and a corresponding Inverse DFT. How does the round-off noise of the FFT compare to the DFT by correlation? See for yourself in Fig. 12-9.

FIGURE 12-9
DFT precision. Since the FFT calculates the DFT faster than the correlation method, it also calculates it with less round-off error.

Further Speed Increases

There are several techniques for making the FFT even faster; however, the improvements are only about 20-40%. In one of these methods, the time domain decomposition is stopped two stages early, when each signal is composed of only four points. Instead of calculating the last two stages, highly optimized code is used to jump directly into the frequency domain, using the simplicity of four point sine and cosine waves.

TABLE 12-6

4000 'INVERSE FFT FOR REAL SIGNALS
4010 'Upon entry, N% contains the number of points in the IDFT, REX[ ] and
4020 'IMX[ ] contain the real and imaginary parts of the frequency domain running from
4030 'index 0 to N%/2. The remaining samples in REX[ ] and IMX[ ] are ignored.
4040 'Upon return, REX[ ] contains the real time domain, IMX[ ] contains zeros.
4050 '
4060 '
4070 FOR K% = (N%/2+1) TO (N%-1)  'Make frequency domain symmetrical
4080 REX[K%] = REX[N%-K%]         '(as in Table 12-1)
4090 IMX[K%] = -IMX[N%-K%]
4100 NEXT K%
4110 '
4120 FOR K% = 0 TO N%-1           'Add real and imaginary parts together
4130 REX[K%] = REX[K%]+IMX[K%]
4140 NEXT K%
4150 '
4160 GOSUB 3000                   'Calculate forward real DFT (Table 12-7)
4170 '
4180 FOR I% = 0 TO N%-1           'Add real and imaginary parts together
4190 REX[I%] = (REX[I%]+IMX[I%])/N%  'and divide the time domain by N%
4200 IMX[I%] = 0
4210 NEXT I%
4220 '
4230 RETURN
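Before looking at the next technique, note that the round-trip experiment behind Fig. 12-9 (and the first debugging trick mentioned earlier) can be reproduced in a few lines. This hedged sketch uses NumPy's FFT in double precision, so the measured noise will be far below the few parts-per-million quoted for single precision:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)

    # Forward FFT, then inverse FFT: the result should match the original
    # signal except for round-off noise.
    x2 = np.fft.ifft(np.fft.fft(x))
    print("round-off std:", np.std(x - x2))    # on the order of 1e-16 here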
Another popular algorithm eliminates the wasted calculations associated with the imaginary part of the time domain being zero, and the frequency spectrum being symmetrical. In other words, the FFT is modified to calculate the real DFT, instead of the complex DFT. These algorithms are called the real FFT and the real Inverse FFT (or similar names). Expect them to be about 30% faster than the conventional FFT routines. Tables 12-6 and 12-7 show programs for these algorithms.

There are two small disadvantages in using the real FFT. First, the code is about twice as long. While your computer doesn't care, you must take the time to convert someone else's program to run on your computer. Second, debugging these programs is slightly harder because you cannot use symmetry as a check for proper operation. These algorithms force the imaginary part of the time domain to be zero, and the frequency domain to have left-right symmetry. For debugging, check that these programs produce the same output as the conventional FFT algorithms.

Figures 12-10 and 12-11 illustrate how the real FFT works. In Fig. 12-10, (a) and (b) show a time domain signal that consists of a pulse in the real part, and all zeros in the imaginary part. Figures (c) and (d) show the corresponding frequency spectrum. As previously described, the frequency domain's real part has an even symmetry around sample 0 and sample N/2, while the imaginary part has an odd symmetry around these same points.

FIGURE 12-10
Real part symmetry of the DFT. Panels: (a) real part and (b) imaginary part of the time domain; (c) real part (even symmetry) and (d) imaginary part (odd symmetry) of the frequency domain.

Now consider Fig. 12-11, where the pulse is in the imaginary part of the time domain, and the real part is all zeros. The symmetry in the frequency domain is reversed; the real part is odd, while the imaginary part is even. This situation will be discussed in Chapter 29. For now, take it for granted that this is how the complex DFT behaves.

What if there is a signal in both parts of the time domain? By additivity, the frequency domain will be the sum of the two frequency spectra. Now the key element: a frequency spectrum composed of these two types of symmetry can be perfectly separated into the two component signals. This is achieved by the even/odd decomposition discussed in Chapter 6. In other words, two real DFT's can be calculated for the price of a single FFT. One of the signals is placed in the real part of the time domain, and the other signal is placed in the imaginary part. After calculating the complex DFT (via the FFT, of course), the spectra are separated using the even/odd decomposition. When two or more signals need to be passed through the FFT, this technique reduces the execution time by about 40%. The improvement isn't a full factor of two because of the calculation time required for the even/odd decomposition. This is a relatively simple technique with few pitfalls, nothing like writing an FFT routine from scratch.
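A hedged sketch of this two-for-one trick (the function name two_real_dfts is ours, and the even/odd separation is expressed with array arithmetic rather than an explicit loop). One real signal rides in the real part, the other in the imaginary part, and the combined spectrum is separated afterwards:

    import numpy as np

    def two_real_dfts(x1, x2):
        """Two real DFTs from one complex FFT, via even/odd decomposition."""
        spec = np.fft.fft(x1 + 1j * x2)          # x1 -> real part, x2 -> imaginary part
        rev = np.roll(spec[::-1], 1)             # rev[k] = spec[N-k] (mod N)
        spec1 = 0.5 * (spec + np.conj(rev))      # spectrum of x1 alone
        spec2 = -0.5j * (spec - np.conj(rev))    # spectrum of x2 alone
        return spec1, spec2

    rng = np.random.default_rng(2)
    x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
    s1, s2 = two_real_dfts(x1, x2)
    print(np.allclose(s1, np.fft.fft(x1)), np.allclose(s2, np.fft.fft(x2)))  # True True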
FIGURE 12-11
Imaginary part symmetry of the DFT. Panels: (a) real part and (b) imaginary part of the time domain; (c) real part (odd symmetry) and (d) imaginary part (even symmetry) of the frequency domain.

The next step is to modify the algorithm to calculate a single DFT faster. It's ugly, but here is how it is done. The input signal is broken in half by using an interlaced decomposition. The N/2 even points are placed into the real part of the time domain signal, while the N/2 odd points go into the imaginary part. An N/2 point FFT is then calculated, requiring about one-half the time as an N point FFT. The resulting frequency domain is then separated by the even/odd decomposition, resulting in the frequency spectra of the two interlaced time domain signals. These two frequency spectra are then combined into a single spectrum, just as in the last synthesis stage of the FFT.

To close this chapter, consider that the FFT is to Digital Signal Processing what the transistor is to electronics. It is a foundation of the technology; everyone in the field knows its characteristics and how to use it. However, only a small number of specialists really understand the details of the internal workings.

TABLE 12-7

3000 'FFT FOR REAL SIGNALS
3010 'Upon entry, N% contains the number of points in the DFT, REX[ ] contains
3020 'the real input signal, while values in IMX[ ] are ignored. Upon return,
3030 'REX[ ] and IMX[ ] contain the DFT output. All signals run from 0 to N%-1.
3040 '
3050 NH% = N%/2-1                 'Separate even and odd points
3060 FOR I% = 0 TO NH%
3070 REX[I%] = REX[2*I%]
3080 IMX[I%] = REX[2*I%+1]
3090 NEXT I%
3100 '
3110 N% = N%/2                    'Calculate N%/2 point FFT
3120 GOSUB 1000                   '(GOSUB 1000 is the FFT in Table 12-4)
3130 N% = N%*2
3140 '
3150 NM1% = N%-1                  'Even/odd frequency domain decomposition
3160 ND2% = N%/2
3170 N4% = N%/4-1
3180 FOR I% = 1 TO N4%
3190 IM% = ND2%-I%
3200 IP2% = I%+ND2%
3210 IPM% = IM%+ND2%
3220 REX[IP2%] = (IMX[I%] + IMX[IM%])/2
3230 REX[IPM%] = REX[IP2%]
3240 IMX[IP2%] = -(REX[I%] - REX[IM%])/2
3250 IMX[IPM%] = -IMX[IP2%]
3260 REX[I%] = (REX[I%] + REX[IM%])/2
3270 REX[IM%] = REX[I%]
3280 IMX[I%] = (IMX[I%] - IMX[IM%])/2
3290 IMX[IM%] = -IMX[I%]
3300 NEXT I%
3310 REX[N%*3/4] = IMX[N%/4]
3320 REX[ND2%] = IMX[0]
3330 IMX[N%*3/4] = 0
3340 IMX[ND2%] = 0
3350 IMX[N%/4] = 0
3360 IMX[0] = 0
3370 '
3380 PI = 3.14159265              'Complete the last FFT stage
3390 L% = CINT(LOG(N%)/LOG(2))
3400 LE% = CINT(2^L%)
3410 LE2% = LE%/2
3420 UR = 1
3430 UI = 0
3440 SR = COS(PI/LE2%)
3450 SI = -SIN(PI/LE2%)
3460 FOR J% = 1 TO LE2%
3470 JM1% = J%-1
3480 FOR I% = JM1% TO NM1% STEP LE%
3490 IP% = I%+LE2%
3500 TR = REX[IP%]*UR - IMX[IP%]*UI
3510 TI = REX[IP%]*UI + IMX[IP%]*UR
3520 REX[IP%] = REX[I%]-TR
3530 IMX[IP%] = IMX[I%]-TI
3540 REX[I%] = REX[I%]+TR
3550 IMX[I%] = IMX[I%]+TI
3560 NEXT I%
3570 TR = UR
3580 UR = TR*SR - UI*SI
3590 UI = TR*SI + UI*SR
3600 NEXT J%
3610 RETURN

CHAPTER 15
Moving Average Filters

EQUATION 15-1
Equation of the moving average filter. In this equation, x[ ] is the input signal, y[ ] is the output signal, and M is the number of points used in the moving average. This equation only uses points on one side of the output sample being calculated.

    y[i] = \frac{1}{M} \sum_{j=0}^{M-1} x[i+j]

For example, with a 5 point one-sided average:

    y[80] = ( x[80] + x[81] + x[82] + x[83] + x[84] ) / 5

The moving average is the most common filter in DSP, mainly because it is the easiest digital filter to understand and use.
In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response. This makes it the premier filter for time domain encoded signals. However, the moving average is the worst filter for frequency domain encoded signals, with little ability to separate one band of frequencies from another. Relatives of the moving average filter include the Gaussian, Blackman, and multiple-pass moving average. These have slightly better performance in the frequency domain, at the expense of increased computation time.

Implementation by Convolution

As the name implies, the moving average filter operates by averaging a number of points from the input signal to produce each point in the output signal. In equation form, this is written as Eq. 15-1, where x[ ] is the input signal, y[ ] is the output signal, and M is the number of points in the average. For example, in a 5 point moving average filter, point 80 in the output signal is given by the one-sided sum shown under Eq. 15-1. As an alternative, the group of points from the input signal can be chosen symmetrically around the output point:

    y[80] = ( x[78] + x[79] + x[80] + x[81] + x[82] ) / 5

This corresponds to changing the summation in Eq. 15-1 from j = 0 to M-1, to j = -(M-1)/2 to (M-1)/2. For instance, in an 11 point moving average filter, the index, j, can run from 0 to 10 (one side averaging) or from -5 to 5 (symmetrical averaging). Symmetrical averaging requires that M be an odd number. Programming is slightly easier with the points on only one side; however, this produces a relative shift between the input and output signals.

You should recognize that the moving average filter is a convolution using a very simple filter kernel. For example, a 5 point filter has the filter kernel: ..., 0, 0, 1/5, 1/5, 1/5, 1/5, 1/5, 0, 0, .... That is, the moving average filter is a convolution of the input signal with a rectangular pulse having an area of one. Table 15-1 shows a program to implement the moving average filter.

TABLE 15-1

100 'MOVING AVERAGE FILTER
110 'This program filters 5000 samples with a 101 point moving
120 'average filter, resulting in 4900 samples of filtered data.
130 '
140 DIM X[4999]                   'X[ ] holds the input signal
150 DIM Y[4999]                   'Y[ ] holds the output signal
160 '
170 GOSUB XXXX                    'Mythical subroutine to load X[ ]
180 '
190 FOR I% = 50 TO 4949           'Loop for each point in the output signal
200 Y[I%] = 0                     'Zero, so it can be used as an accumulator
210 FOR J% = -50 TO 50            'Calculate the summation
220 Y[I%] = Y[I%] + X[I%+J%]
230 NEXT J%
240 Y[I%] = Y[I%]/101             'Complete the average by dividing
250 NEXT I%
260 '
270 END

Noise Reduction vs. Step Response

Many scientists and engineers feel guilty about using the moving average filter. Because it is so very simple, the moving average filter is often the first thing tried when faced with a problem. Even if the problem is completely solved, there is still the feeling that something more should be done. This situation is truly ironic. Not only is the moving average filter very good for many applications, it is optimal for a common problem: reducing random white noise while keeping the sharpest step response.

FIGURE 15-1
Example of a moving average filter. In (a), a rectangular pulse is buried in random noise.
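In Python, the convolution view of Table 15-1 can be written directly with a rectangular kernel of area one. A minimal sketch (mode='same' handles the edges differently than the explicit index range 50 to 4949 in the BASIC program, so the first and last 50 samples are not strictly comparable):

    import numpy as np

    def moving_average(x, m):
        """M point moving average: convolve with a rectangular kernel of area one."""
        kernel = np.ones(m) / m
        return np.convolve(x, kernel, mode='same')

    # Example in the spirit of Fig. 15-1: a noisy step, smoothed with M = 11.
    rng = np.random.default_rng(1)
    x = np.concatenate([np.zeros(250), np.ones(250)])
    x += 0.2 * rng.standard_normal(500)
    y = moving_average(x, 11)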
In (b) and (c) of Fig. 15-1, this signal is filtered with 11 and 51 point moving average filters, respectively. As the number of points in the filter increases, the noise becomes lower; however, the edges become less sharp. The moving average filter is the optimal solution for this problem, providing the lowest noise possible for a given edge sharpness.

Figure 15-1 shows an example of how this works. The signal in (a) is a pulse buried in random noise. In (b) and (c), the smoothing action of the moving average filter decreases the amplitude of the random noise (good), but also reduces the sharpness of the edges (bad). Of all the possible linear filters that could be used, the moving average produces the lowest noise for a given edge sharpness. The amount of noise reduction is equal to the square-root of the number of points in the average. For example, a 100 point moving average filter reduces the noise by a factor of 10.

To understand why the moving average is the best solution, imagine we want to design a filter with a fixed edge sharpness. For example, let's assume we fix the edge sharpness by specifying that there are eleven points in the rise of the step response. This requires that the filter kernel have eleven points. The optimization question is: how do we choose the eleven values in the filter kernel to minimize the noise on the output signal? Since the noise we are trying to reduce is random, none of the input points is special; each is just as noisy as its neighbor. Therefore, it is useless to give preferential treatment to any one of the input points by assigning it a larger coefficient in the filter kernel. The lowest noise is obtained when all the input samples are treated equally, i.e., the moving average filter. (Later in this chapter we show that other filters are essentially as good. The point is, no filter is better than the simple moving average.)

Frequency Response

Figure 15-2 shows the frequency response of the moving average filter. It is mathematically described by the Fourier transform of the rectangular pulse, as discussed in Chapter 11:

EQUATION 15-2
Frequency response of an M point moving average filter. The frequency, f, runs between 0 and 0.5. For f = 0, use H[f] = 1.

    H[f] = \frac{\sin(\pi f M)}{M \sin(\pi f)}

FIGURE 15-2
Frequency response of the moving average filter (shown for 3, 11, and 31 point filters). The moving average is a very poor low-pass filter, due to its slow roll-off and poor stopband attenuation. These curves are generated by Eq. 15-2.

The roll-off is very slow and the stopband attenuation is ghastly. Clearly, the moving average filter cannot separate one band of frequencies from another. Remember, good performance in the time domain results in poor performance in the frequency domain, and vice versa. In short, the moving average is an exceptionally good smoothing filter (the action in the time domain), but an exceptionally bad low-pass filter (the action in the frequency domain).

Relatives of the Moving Average Filter

In a perfect world, filter designers would only have to deal with time domain or frequency domain encoded information, but never a mixture of the two in the same signal. Unfortunately, there are some applications where both domains are simultaneously important. For instance, television signals fall into this nasty category.
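(Returning to Eq. 15-2 for a moment: it is simple to evaluate numerically, the only care needed being the special case H[0] = 1. A hedged sketch that reproduces the curves of Fig. 15-2:)

    import numpy as np

    def h_moving_average(f, m):
        """Frequency response of an M point moving average (Eq. 15-2)."""
        f = np.asarray(f, dtype=float)
        out = np.ones_like(f)              # H[0] = 1 by definition
        nz = f != 0
        out[nz] = np.sin(np.pi * f[nz] * m) / (m * np.sin(np.pi * f[nz]))
        return out

    f = np.linspace(0, 0.5, 6)
    print(h_moving_average(f, 11).round(3))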
Television signals illustrate the problem. Video information is encoded in the time domain, that is, the shape of the waveform corresponds to the patterns of brightness in the image. However, during transmission the video signal is treated according to its frequency composition, such as its total bandwidth, how the carrier waves for sound & color are added, elimination & restoration of the DC component, etc. As another example, electromagnetic interference is best understood in the frequency domain, even if the signal's information is encoded in the time domain. For instance, the temperature monitor in a scientific experiment might be contaminated with 60 hertz from the power lines, 30 kHz from a switching power supply, or 1320 kHz from a local AM radio station. Relatives of the moving average filter have better frequency domain performance, and can be useful in these mixed domain applications.

FIGURE 15-3
Characteristics of multiple-pass moving average filters. Figure (a) shows the filter kernels resulting from passing a seven point moving average filter over the data once, twice and four times. Figure (b) shows the corresponding step responses, while (c) and (d) show the corresponding frequency responses (on linear and dB scales).

Multiple-pass moving average filters involve passing the input signal through a moving average filter two or more times. Figure 15-3a shows the overall filter kernel resulting from one, two and four passes. Two passes are equivalent to using a triangular filter kernel (a rectangular filter kernel convolved with itself). After four or more passes, the equivalent filter kernel looks like a Gaussian (recall the Central Limit Theorem). As shown in (b), multiple passes produce an "s" shaped step response, as compared to the straight line of the single pass. The frequency responses in (c) and (d) are given by Eq. 15-2 multiplied by itself for each pass. That is, each time domain convolution results in a multiplication of the frequency spectra.

Figure 15-4 shows the frequency response of two other relatives of the moving average filter. When a pure Gaussian is used as a filter kernel, the frequency response is also a Gaussian, as discussed in Chapter 11. The Gaussian is important because it is the impulse response of many natural and manmade systems. For example, a brief pulse of light entering a long fiber optic transmission line will exit as a Gaussian pulse, due to the different paths taken by the photons within the fiber. The Gaussian filter kernel is also used extensively in image processing because it has unique properties that allow fast two-dimensional convolutions (see Chapter 24). The second frequency response in Fig. 15-4 corresponds to using a Blackman window as a filter kernel. (The term window has no meaning here; it is simply part of the accepted name of this curve). The exact shape of the Blackman window is given in Chapter 16 (Eq. 16-2, Fig. 16-2); however, it looks much like a Gaussian.
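The kernel-shaping effect of multiple passes is easy to verify: convolving the rectangular kernel with itself gives the equivalent kernel of two passes, and repeating it gives four. A minimal sketch using a seven point kernel, as in Fig. 15-3a:

    import numpy as np

    rect = np.ones(7) / 7                  # one pass: rectangular kernel
    tri = np.convolve(rect, rect)          # two passes: triangular kernel
    bell = np.convolve(tri, tri)           # four passes: approximately Gaussian
    print(len(rect), len(tri), len(bell))  # 7, 13, 25 -> kernels widen per pass

Plotting bell against a true Gaussian shows the resemblance the Central Limit Theorem predicts.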
How are these relatives of the moving average filter better than the moving average filter itself? Three ways: First, and most important, these filters have better stopband attenuation than the moving average filter. Second, the filter kernels taper to a smaller amplitude near the ends. Recall that each point in the output signal is a weighted sum of a group of samples from the input. If the filter kernel tapers, samples in the input signal that are farther away are given less weight than those close by. Third, the step responses are smooth curves, rather than the abrupt straight line of the moving average. These last two are usually of limited benefit, although you might find applications where they are genuine advantages.

The moving average filter and its relatives are all about the same at reducing random noise while maintaining a sharp step response. The ambiguity lies in how the risetime of the step response is measured. If the risetime is measured from 0% to 100% of the step, the moving average filter is the best you can do, as previously shown. In comparison, measuring the risetime from 10% to 90% makes the Blackman window better than the moving average filter. The point is, this is just theoretical squabbling; consider these filters equal in this parameter.

The biggest difference in these filters is execution speed. Using a recursive algorithm (described next), the moving average filter will run like lightning in your computer. In fact, it is the fastest digital filter available. Multiple passes of the moving average will be correspondingly slower, but still very quick. In comparison, the Gaussian and Blackman filters are excruciatingly slow, because they must use convolution. Think a factor of ten times the number of points in the filter kernel (based on multiplication being about 10 times slower than addition). For example, expect a 100 point Gaussian to be 1000 times slower than a moving average using recursion.

FIGURE 15-4
Frequency response of the Blackman window and Gaussian filter kernels. Both these filters provide better stopband attenuation than the moving average filter. This has no advantage in removing random noise from time domain encoded signals, but it can be useful in mixed domain problems. The disadvantage of these filters is that they must use convolution, a terribly slow algorithm.

Recursive Implementation

A tremendous advantage of the moving average filter is that it can be implemented with an algorithm that is very fast. To understand this algorithm, imagine passing an input signal, x[ ], through a seven point moving average filter to form an output signal, y[ ]. Now look at how two adjacent output points, y[50] and y[51], are calculated (for clarity, the division by M is omitted here; the program in Table 15-2 restores it):

    y[50] = x[47] + x[48] + x[49] + x[50] + x[51] + x[52] + x[53]
    y[51] = x[48] + x[49] + x[50] + x[51] + x[52] + x[53] + x[54]

These are nearly the same calculation; points x[48] through x[53] must be added for y[50], and again for y[51].
If y[50] has already been calculated, the most efficient way to calculate y[51] is:

    y[51] = y[50] + x[54] - x[47]

Once y[51] has been found using y[50], then y[52] can be calculated from sample y[51], and so on. After the first point is calculated in y[ ], all of the other points can be found with only a single addition and subtraction per point. This can be expressed in the equation:

EQUATION 15-3
Recursive implementation of the moving average filter. In this equation, x[ ] is the input signal, y[ ] is the output signal, and M is the number of points in the moving average (an odd number). Before this equation can be used, the first point in the signal must be calculated using a standard summation.

    y[i] = y[i-1] + x[i+p] - x[i-q], \quad \text{where } p = (M-1)/2 \text{ and } q = p + 1

Notice that this equation uses two sources of data to calculate each point in the output: points from the input and previously calculated points from the output. This is called a recursive equation, meaning that the result of one calculation is used in future calculations.

TABLE 15-2

100 'MOVING AVERAGE FILTER IMPLEMENTED BY RECURSION
110 'This program filters 5000 samples with a 101 point moving
120 'average filter, resulting in 4900 samples of filtered data.
130 'A double precision accumulator is used to prevent round-off drift.
140 '
150 DIM X[4999]                   'X[ ] holds the input signal
160 DIM Y[4999]                   'Y[ ] holds the output signal
170 DEFDBL ACC                    'Define the variable ACC to be double precision
180 '
190 GOSUB XXXX                    'Mythical subroutine to load X[ ]
200 '
210 ACC = 0                       'Find Y[50] by averaging points X[0] to X[100]
220 FOR I% = 0 TO 100
230 ACC = ACC + X[I%]
240 NEXT I%
250 Y[50] = ACC/101
260 '                             'Recursive moving average filter (Eq. 15-3)
270 FOR I% = 51 TO 4949
280 ACC = ACC + X[I%+50] - X[I%-51]
290 Y[I%] = ACC/101
300 NEXT I%
310 '
320 END

CHAPTER 6
Convolution

Convolution is a mathematical way of combining two signals to form a third signal. It is the single most important technique in Digital Signal Processing. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response. Convolution is important because it relates the three signals of interest: the input signal, the output signal, and the impulse response. This chapter presents convolution from two different viewpoints, called the input side algorithm and the output side algorithm. Convolution provides the mathematical framework for DSP; there is nothing more important in this book.

The Delta Function and Impulse Response

The previous chapter describes how a signal can be decomposed into a group of components called impulses. An impulse is a signal composed of all zeros, except a single nonzero point. In effect, impulse decomposition provides a way to analyze signals one sample at a time. The previous chapter also presented the fundamental concept of DSP: the input signal is decomposed into simple additive components, each of these components is passed through a linear system, and the resulting output components are synthesized (added). The signal resulting from this divide-and-conquer procedure is identical to that obtained by directly passing the original signal through the system. While many different decompositions are possible, two form the backbone of signal processing: impulse decomposition and Fourier decomposition. When impulse decomposition is used, the procedure can be described by a mathematical operation called convolution. In this chapter (and most of the following ones) we will only be dealing with discrete signals. Convolution also applies to continuous signals, but the mathematics is more complicated. We will look at how continuous signals are processed in Chapter 13.

Figure 6-1 defines two important terms used in DSP. The first is the delta function, symbolized by the Greek letter delta, δ[n].
The delta function is a normalized impulse, that is, sample number zero has a value of one, while all other samples have a value of zero. For this reason, the delta function is frequently called the unit impulse.

The second term defined in Fig. 6-1 is the impulse response. As the name suggests, the impulse response is the signal that exits a system when a delta function (unit impulse) is the input. If two systems are different in any way, they will have different impulse responses. Just as the input and output signals are often called x[n] and y[n], the impulse response is usually given the symbol, h[n]. Of course, this can be changed if a more descriptive name is available, for instance, f[n] might be used to identify the impulse response of a filter.

Any impulse can be represented as a shifted and scaled delta function. Consider a signal, a[n], composed of all zeros except sample number 8, which has a value of -3. This is the same as a delta function shifted to the right by 8 samples, and multiplied by -3. In equation form: a[n] = -3δ[n-8]. Make sure you understand this notation, it is used in nearly all DSP equations.

If the input to a system is an impulse, such as -3δ[n-8], what is the system's output? This is where the properties of homogeneity and shift invariance are used. Scaling and shifting the input results in an identical scaling and shifting of the output. If δ[n] results in h[n], it follows that -3δ[n-8] results in -3h[n-8]. In words, the output is a version of the impulse response that has been shifted and scaled by the same amount as the delta function on the input. If you know a system's impulse response, you immediately know how it will react to any impulse.

Convolution

Let's summarize this way of understanding how a system changes an input signal into an output signal. First, the input signal can be decomposed into a set of impulses, each of which can be viewed as a scaled and shifted delta function. Second, the output resulting from each impulse is a scaled and shifted version of the impulse response. Third, the overall output signal can be found by adding these scaled and shifted impulse responses. In other words, if we know a system's impulse response, then we can calculate what the output will be for any possible input signal. This means we know everything about the system. There is nothing more that can be learned about a linear system's characteristics. (However, in later chapters we will show that this information can be represented in different forms).

The impulse response goes by a different name in some applications. If the system being considered is a filter, the impulse response is called the filter kernel, the convolution kernel, or simply, the kernel. In image processing, the impulse response is called the point spread function. While these terms are used in slightly different ways, they all mean the same thing, the signal produced by a system when the input is a delta function.

FIGURE 6-1
Definition of delta function and impulse response. The delta function is a normalized impulse. All of its samples have a value of zero, except for sample number zero, which has a value of one. The Greek letter delta, δ[n], is used to identify the delta function.
The impulse response of a linear system, usually denoted by h[n], is the output of the system when the input is a delta function.

FIGURE 6-2
How convolution is used in DSP. The output signal from a linear system is equal to the input signal convolved with the system's impulse response: x[n] * h[n] = y[n]. Convolution is denoted by a star when writing equations.

Convolution is a formal mathematical operation, just as multiplication, addition, and integration. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal. Convolution is used in the mathematics of many fields, such as probability and statistics. In linear systems, convolution is used to describe the relationship between three signals of interest: the input signal, the impulse response, and the output signal.

Figure 6-2 shows the notation when convolution is used with linear systems. An input signal, x[n], enters a linear system with an impulse response, h[n], resulting in an output signal, y[n]. In equation form: x[n] * h[n] = y[n]. Expressed in words, the input signal convolved with the impulse response is equal to the output signal. Just as addition is represented by the plus, +, and multiplication by the cross, ×, convolution is represented by the star, *. It is unfortunate that most programming languages also use the star to indicate multiplication. A star in a computer program means multiplication, while a star in an equation means convolution.

FIGURE 6-3
Examples of low-pass and high-pass filtering using convolution. In this example, the input signal is a few cycles of a sine wave plus a slowly rising ramp. These two components are separated by using properly selected impulse responses.

Figure 6-3 shows convolution being used for low-pass and high-pass filtering. The example input signal is the sum of two components: three cycles of a sine wave (representing a high frequency), plus a slowly rising ramp (composed of low frequencies). In (a), the impulse response for the low-pass filter is a smooth arch, resulting in only the slowly changing ramp waveform being passed to the output. Similarly, the high-pass filter, (b), allows only the more rapidly changing sinusoid to pass.

Figure 6-4 illustrates two additional examples of how convolution is used to process signals. The inverting attenuator, (a), flips the signal top-for-bottom, and reduces its amplitude. The discrete derivative (also called the first difference), shown in (b), results in an output signal related to the slope of the input signal.

Notice the lengths of the signals in Figs. 6-3 and 6-4. The input signals are 81 samples long, while each impulse response is composed of 31 samples. In most DSP applications, the input signal is hundreds, thousands, or even millions of samples in length. The impulse response is usually much shorter, say, a few points to a few hundred points.
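The separation in Fig. 6-3 can be imitated with a few lines of Python. The book does not list the exact impulse responses used, so this sketch substitutes a Blackman-window low-pass kernel and its spectral inversion for the high-pass; the input signal (three cycles of a sine wave plus a ramp) matches the description above:

    import numpy as np

    n = np.arange(81)
    x = np.sin(2 * np.pi * 3 * n / 81) + n / 81   # sine wave plus rising ramp

    # Low-pass kernel: a smooth "arch" with unit area.
    lp = np.blackman(31)
    lp /= lp.sum()

    # High-pass kernel: a delta minus the low-pass kernel (spectral inversion).
    hp = -lp
    hp[15] += 1.0                                 # center sample of the 31 point kernel

    ramp_part = np.convolve(x, lp)                # mostly the slow ramp survives
    sine_part = np.convolve(x, hp)                # mostly the sinusoid survives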
The mathematics behind convolution doesn't restrict how long these signals are. It does, however, specify the length of the output signal: the length of the output signal is equal to the length of the input signal, plus the length of the impulse response, minus one. For the signals in Figs. 6-3 and 6-4, each output signal is 81 + 31 - 1 = 111 samples long. The input signal runs from sample 0 to 80, the impulse response from sample 0 to 30, and the output signal from sample 0 to 110.

FIGURE 6-4
Examples of signals being processed using convolution. Many signal processing tasks use very simple impulse responses. As shown in these examples, dramatic changes can be achieved with only a few nonzero points.

Now we come to the detailed mathematics of convolution. As used in Digital Signal Processing, convolution can be understood in two separate ways. The first looks at convolution from the viewpoint of the input signal. This involves analyzing how each sample in the input signal contributes to many points in the output signal. The second way looks at convolution from the viewpoint of the output signal. This examines how each sample in the output signal has received information from many points in the input signal.

Keep in mind that these two perspectives are different ways of thinking about the same mathematical operation. The first viewpoint is important because it provides a conceptual understanding of how convolution pertains to DSP. The second viewpoint describes the mathematics of convolution. This typifies one of the most difficult tasks you will encounter in DSP: making your conceptual understanding fit with the jumble of mathematics used to communicate the ideas.

FIGURE 6-5
Example convolution problem. A nine point input signal, convolved with a four point impulse response, results in a twelve point output signal. Each point in the input signal contributes a scaled and shifted impulse response to the output signal. These nine scaled and shifted impulse responses are shown in Fig. 6-6.

The Input Side Algorithm

Figure 6-5 shows a simple convolution problem: a 9 point input signal, x[n], is passed through a system with a 4 point impulse response, h[n], resulting in a 9 + 4 - 1 = 12 point output signal, y[n]. In mathematical terms, x[n] is convolved with h[n] to produce y[n].
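The length bookkeeping is easy to confirm numerically. The sample values below are invented for illustration; only the lengths matter:

    import numpy as np

    x = np.array([0, -1.0, -1.2, 2.0, 1.4, 1.4, 0.7, 0.0, -0.5])  # 9 points (made up)
    h = np.array([1.0, -0.5, -0.3, -0.2])                         # 4 points (made up)

    y = np.convolve(x, h)      # "full" convolution
    print(len(y))              # 9 + 4 - 1 = 12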
This first viewpoint of convolution is based on the fundamental concept of DSP: decompose the input, pass the components through the system, and synthesize the output. In this example, each of the nine samples in the input signal will contribute a scaled and shifted version of the impulse response to the output signal. These nine signals are shown in Fig. 6-6. Adding these nine signals produces the output signal, y[n].

Let's look at several of these nine signals in detail. We will start with sample number four in the input signal, i.e., x[4]. This sample is at index number four, and has a value of 1.4. When the signal is decomposed, this turns into an impulse represented as: 1.4δ[n-4]. After passing through the system, the resulting output component will be: 1.4h[n-4]. This signal is shown in the center box of the nine signals in Fig. 6-6. Notice that this is the impulse response, h[n], multiplied by 1.4, and shifted four samples to the right. Zeros have been added at samples 0-3 and at samples 8-11 to serve as place holders. To make this more clear, Fig. 6-6 uses squares to represent the data points that come from the shifted and scaled impulse response, and diamonds for the added zeros.

Now examine sample x[8], the last point in the input signal. This sample is at index number eight, and has a value of -0.5. As shown in the lower-right graph of Fig. 6-6, x[8] results in an impulse response that has been shifted to the right by eight points and multiplied by -0.5. Place holding zeros have been added at points 0-7. Lastly, examine the effect of points x[0] and x[7]. Both these samples have a value of zero, and therefore produce output components consisting of all zeros.

FIGURE 6-6
Output signal components for the convolution in Fig. 6-5. In these signals, each point that results from a scaled and shifted impulse response is represented by a square marker. The remaining data points, represented by diamonds, are zeros that have been added as place holders. Each panel shows the contribution from one input sample, x[0] through x[8].

In this example, x[n] is a nine point signal and h[n] is a four point signal. In our next example, shown in Fig. 6-7, we will reverse the situation by making x[n] a four point signal, and h[n] a nine point signal. The same two waveforms are used, they are just swapped. As shown by the output signal components, the four samples in x[n] result in four shifted and scaled versions of the nine point impulse response. Just as before, leading and trailing zeros are added as place holders.

But wait just one moment! The output signal in Fig. 6-7 is identical to the output signal in Fig. 6-5. This isn't a mistake, but an important property. Convolution is commutative: a[n] * b[n] = b[n] * a[n]. The mathematics does not care which is the input signal and which is the impulse response, only that two signals are convolved with each other. Although the mathematics may allow it, exchanging the two signals has no physical meaning in system theory. The input signal and impulse response are two totally different things and exchanging them doesn't make sense. What the commutative property provides is a mathematical tool for manipulating equations to achieve various results.
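Commutativity is equally easy to confirm (the two arrays here are arbitrary):

    import numpy as np

    a = np.array([1.0, 2.0, -1.0, 0.5])
    b = np.array([0.25, 0.5, 0.25])
    print(np.allclose(np.convolve(a, b), np.convolve(b, a)))   # True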
TABLE 6-1

100 'CONVOLUTION USING THE INPUT SIDE ALGORITHM
110 '
120 DIM X[80]                     'The input signal, 81 points
130 DIM H[30]                     'The impulse response, 31 points
140 DIM Y[110]                    'The output signal, 111 points
150 '
160 GOSUB XXXX                    'Mythical subroutine to load X[ ] and H[ ]
170 '
180 FOR I% = 0 TO 110             'Zero the output array
190 Y[I%] = 0
200 NEXT I%
210 '
220 FOR I% = 0 TO 80              'Loop for each point in X[ ]
230 FOR J% = 0 TO 30              'Loop for each point in H[ ]
240 Y[I%+J%] = Y[I%+J%] + X[I%]*H[J%]
250 NEXT J%
260 NEXT I%                       '(remember, * is multiplication in programs!)
270 '
280 GOSUB XXXX                    'Mythical subroutine to store Y[ ]
290 '
300 END

A program for calculating convolutions using the input side algorithm is shown in Table 6-1. Remember, the programs in this book are meant to convey algorithms in the simplest form, even at the expense of good programming style. For instance, all of the input and output is handled in mythical subroutines (lines 160 and 280), meaning we do not define how these operations are conducted. Do not skip over these programs; they are a key part of the material and you need to understand them in detail.

The program convolves an 81 point input signal, held in array X[ ], with a 31 point impulse response, held in array H[ ], resulting in a 111 point output signal, held in array Y[ ]. These are the same lengths shown in Figs. 6-3 and 6-4. Notice that the names of these arrays use upper case letters. This is a violation of the naming conventions previously discussed, because upper case letters are reserved for frequency domain signals. Unfortunately, the simple BASIC used in this book does not allow lower case variable names. Also notice that line 240 uses a star for multiplication. Remember, a star in a program means multiplication, while a star in an equation means convolution. A star in text (such as documentation or program comments) can mean either.

The mythical subroutine in line 160 places the input signal into X[ ] and the impulse response into H[ ]. Lines 180-200 set all of the values in Y[ ] to zero. This is necessary because Y[ ] is used as an accumulator to sum the output components as they are calculated. Lines 220 to 260 are the heart of the program. The FOR statement in line 220 controls a loop that steps through each point in the input signal, X[ ]. For each sample in the input signal, an inner loop (lines 230-250) calculates a scaled and shifted version of the impulse response, and adds it to the array accumulating the output signal, Y[ ]. This nested loop structure (one loop within another loop) is a key characteristic of convolution programs; become familiar with it.

FIGURE 6-7
A second example of convolution. The waveforms for the input signal and impulse response are exchanged from the example of Fig. 6-5. Since convolution is commutative, the output signals for the two examples are identical. Each panel of output signal components shows the contribution from one input sample, x[0] through x[3].

Keeping the indexing straight in line 240 can drive you crazy!
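A zero-based Python rendering may help untangle it. This is a hedged sketch of the same input side algorithm (the function name is ours); compare the inner statement with line 240:

    import numpy as np

    def convolve_input_side(x, h):
        """Input side algorithm, mirroring Table 6-1: each input sample adds a
        scaled, shifted copy of the impulse response into the output accumulator."""
        y = np.zeros(len(x) + len(h) - 1)
        for i in range(len(x)):            # loop for each point in x[ ]
            for j in range(len(h)):        # loop for each point in h[ ]
                y[i + j] += x[i] * h[j]    # the analog of line 240
        return y

    a = np.array([1.0, -0.5, 2.0])
    b = np.array([0.5, 0.5])
    print(np.allclose(convolve_input_side(a, b), np.convolve(a, b)))   # True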
Returning to the BASIC program, let's say we are halfway through its execution, so that we have just begun action on sample X[40], i.e., I% = 40. The inner loop runs through each point in the impulse response doing three things. First, the impulse response is scaled by multiplying it by the value of the input sample. If this were the only action taken by the inner loop, line 240 could be written, Y[J%] = X[40]*H[J%]. Second, the scaled impulse is shifted 40 samples to the right by adding this number to the index used in the output signal. This second action would change line 240 to: Y[40+J%] = X[40]*H[J%]. Third, Y[ ] must accumulate (synthesize) all the signals resulting from each sample in the input signal. Therefore, the new information must be added to the information that is already in the array. This results in the final command: Y[40+J%] = Y[40+J%] + X[40]*H[J%]. Study this carefully; it is very confusing, but very important.

The Output Side Algorithm

The first viewpoint of convolution analyzes how each sample in the input signal affects many samples in the output signal. In this second viewpoint, we reverse this by looking at individual samples in the output signal, and finding the contributing points from the input. This is important from both mathematical and practical standpoints. Suppose that we are given some input signal and impulse response, and want to find the convolution of the two. The most straightforward method would be to write a program that loops through the output signal, calculating one sample on each loop cycle. Likewise, equations are written in the form: y[n] = some combination of other variables. That is, sample n in the output signal is equal to some combination of the many values in the input signal and impulse response. This requires a knowledge of how each sample in the output signal can be calculated independently of all other samples in the output signal. The output side algorithm provides this information.

Let's look at an example of how a single point in the output signal is influenced by several points from the input. The example point we will use is y[6] in Fig. 6-5. This point is equal to the sum of all the sixth points in the nine output components, shown in Fig. 6-6. Now, look closely at these nine output components and identify which can affect y[6]. That is, find which of these nine signals contains a nonzero sample at the sixth position. Five of the output components only have added zeros (the diamond markers) at the sixth sample, and can therefore be ignored. Only four of the output components are capable of having a nonzero value in the sixth position. These are the output components generated from the input samples: x[3], x[4], x[5], and x[6]. By adding the sixth sample from each of these output components, y[6] is determined as:

    y[6] = x[3]h[3] + x[4]h[2] + x[5]h[1] + x[6]h[0]

That is, four samples from the input signal are multiplied by the four samples in the impulse response, and the products added.
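This single-sample computation can be checked directly. The signal values below are invented (the values in Fig. 6-5 are not tabulated in the text), but the index pattern is exactly the one just derived for y[6]:

    import numpy as np

    x = np.array([0, -1.0, -1.2, 2.0, 1.4, 1.4, 0.7, 0.0, -0.5])  # made-up 9 point input
    h = np.array([1.0, -0.5, -0.3, -0.2])                         # made-up 4 point kernel

    # One output sample, exactly as the convolution machine computes it:
    y6 = x[3]*h[3] + x[4]*h[2] + x[5]*h[1] + x[6]*h[0]
    print(np.isclose(y6, np.convolve(x, h)[6]))                   # True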
Figure 6-8 illustrates the output side algorithm as a convolution machine, a flow diagram of how convolution occurs. Think of the input signal, x[n], and the output signal, y[n], as fixed on the page. The convolution machine, everything inside the dashed box, is free to move left and right as needed. The convolution machine is positioned so that its output is aligned with the output sample being calculated. Four samples from the input signal fall into the inputs of the convolution machine. These values are multiplied by the indicated samples in the impulse response, and the products are added. This produces the value for the output signal, which drops into its proper place. For example, y[6] is shown being calculated from the four input samples: x[3], x[4], x[5], and x[6]. To calculate y[7], the convolution machine moves one sample to the right. This results in another four samples entering the machine, x[4] through x[7], and the value for y[7] dropping into the proper place. This process is repeated for all points in the output signal needing to be calculated.

FIGURE 6-8
The convolution machine. This is a flow diagram showing how each sample in the output signal is influenced by the input signal and impulse response; the impulse response is drawn flipped left-for-right inside the machine. See the text for details.

The arrangement of the impulse response inside the convolution machine is very important. The impulse response is flipped left-for-right. This places sample number zero on the right, and increasingly positive sample numbers running to the left. Compare this to the normal impulse response in Fig. 6-5 to understand the geometry of this flip. Why is this flip needed? It simply falls out of the mathematics. The impulse response describes how each point in the input signal affects the output signal. This results in each point in the output signal being affected by points in the input signal weighted by a flipped impulse response.

FIGURE 6-9
The convolution machine in action. Figures (a) through (d) show the convolution machine set to calculate four different output signal samples, y[0], y[3], y[8], and y[11].

Figure 6-9 shows the convolution machine being used to calculate several samples in the output signal. This diagram also illustrates a real nuisance in convolution. In (a), the convolution machine is located fully to the left with its output aimed at y[0]. In this position, it is trying to receive input from samples: x[-3], x[-2], x[-1], and x[0]. The problem is, three of these samples: x[-3], x[-2], and x[-1], do not exist! This same dilemma arises in (d), where the convolution machine tries to accept samples to the right of the defined input signal, points x[9], x[10], and x[11].

One way to handle this problem is by inventing the nonexistent samples. This involves adding samples to the ends of the input signal, with each of the added samples having a value of zero. This is called padding the signal with zeros. Instead of trying to access a nonexistent value, the convolution machine receives a sample that has a value of zero. Since this zero is eliminated during the multiplication, the result is mathematically the same as ignoring the nonexistent inputs.
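Zero padding is easy to demonstrate: padding M-1 zeros onto each end of the input and sliding a flipped kernel across it reproduces the full convolution. A minimal sketch with made-up signals:

    import numpy as np

    x = np.ones(10)                    # made-up input
    h = np.array([0.5, 0.3, 0.2])      # made-up 3 point kernel (M = 3)

    padded = np.concatenate([np.zeros(len(h) - 1), x, np.zeros(len(h) - 1)])
    y_pad = np.array([np.dot(h[::-1], padded[i:i + len(h)])   # flipped kernel
                      for i in range(len(x) + len(h) - 1)])

    print(np.allclose(y_pad, np.convolve(x, h)))   # True: padding = full convolution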
The important part is that the far left and far right samples in the output signal are based on incomplete information. In DSP jargon, the impulse response is not fully immersed in the input signal. If the impulse response is M points in length, the first and last M-1 samples in the output signal are based on less information than the samples between. This is analogous to an electronic circuit requiring a certain amount of time to stabilize after the power is applied. The difference is that this transient is easy to ignore in electronics, but very prominent in DSP.

Figure 6-10 shows an example of the trouble these end effects can cause. The input signal is a sine wave plus a DC component. The desire is to remove the DC part of the signal, while leaving the sine wave intact. This calls for a high-pass filter, such as the impulse response shown in the figure. The problem is, the first and last 30 points are a mess! The shape of these end regions can be understood by imagining the input signal padded with 30 zeros on the left side, samples x[-1] through x[-30], and 30 zeros on the right, samples x[81] through x[110]. The output signal can then be viewed as a filtered version of this longer waveform. These "end effect" problems are widespread in DSP. As a general rule, expect that the beginning and ending samples in processed signals will be quite useless.

Now the math. Using the convolution machine as a guideline, we can write the standard equation for convolution. If x[n] is an N point signal running from 0 to N-1, and h[n] is an M point signal running from 0 to M-1, the convolution of the two, y[n] = x[n] * h[n], is an N+M-1 point signal running from 0 to N+M-2, given by:

EQUATION 6-1
The convolution summation. This is the formal definition of convolution, written in the shorthand y[n] = x[n] * h[n]. In this equation, h[n] is an M point signal with indexes running from 0 to M-1.

    y[i] = \sum_{j=0}^{M-1} h[j] \, x[i-j]

This equation is called the convolution sum. It allows each point in the output signal to be calculated independently of all other points in the output signal. The index, i, determines which sample in the output signal is being calculated, and therefore corresponds to the left-right position of the convolution machine. In computer programs performing convolution, a loop makes this index run through each sample in the output signal. To calculate one of the output samples, the index, j, is used inside of the convolution machine. As j runs through 0 to M-1, each sample in the impulse response, h[j], is multiplied by the proper sample from the input signal, x[i-j]. All these products are added to produce the output sample being calculated. Study Eq. 6-1 until you fully understand how it is implemented by the convolution machine. Much of DSP is based on this equation.

(Don't be confused by the n in y[n] = x[n] * h[n]. This is merely a place holder to indicate that some variable is the index into the array.
Sometimes the equations are written: y[ ] = x[ ] ∗ h[ ], just to avoid having to bring in a meaningless symbol). Table 6-2 shows a program for performing convolutions using the output side algorithm, a direct use of Eq. 6-1. This program produces the same output signal as the program for the input side algorithm, shown previously in Table 6-1. Notice the main difference between these two programs: the input side algorithm loops through each sample in the input signal (line 220 of Table 6-1), while the output side algorithm loops through each sample in the output signal (line 180 of Table 6-2).

Here is a detailed operation of this program. The FOR-NEXT loop in lines 180 to 250 steps through each sample in the output signal, using I% as the index. For each of these values, an inner loop, composed of lines 200 to 230, calculates the value of the output sample, Y[I%]. The value of Y[I%] is set to zero in line 190, allowing it to accumulate the products inside of the convolution machine. The FOR-NEXT loop in lines 200 to 240 provides a direct implementation of Eq. 6-1. The index, J%, steps through each sample in the impulse response. Line 230 provides the multiplication of each sample in the impulse response, H[J%], with the appropriate sample from the input signal, X[I%-J%], and adds the result to the accumulator.

In line 230, the sample taken from the input signal is: X[I%-J%]. Lines 210 and 220 prevent this from being outside the defined array, X[0] to X[80]. In other words, this program handles undefined samples in the input signal by ignoring them. Another alternative would be to define the input signal's array from X[-30] to X[110], allowing 30 zeros to be padded on each side of the true data. As a third alternative, the FOR-NEXT loop in line 180 could be changed to run from 30 to 80, rather than 0 to 110. That is, the program would only calculate the samples in the output signal where the impulse response is fully immersed in the input signal. The important thing is that you must use one of these three techniques. If you don't, the program will crash when it tries to read the out-of-bounds data.

100 'CONVOLUTION USING THE OUTPUT SIDE ALGORITHM
110 '
120 DIM X[80]   'The input signal, 81 points
130 DIM H[30]   'The impulse response, 31 points
140 DIM Y[110]  'The output signal, 111 points
150 '
160 GOSUB XXXX  'Mythical subroutine to load X[ ] and H[ ]
170 '
180 FOR I% = 0 TO 110  'Loop for each point in Y[ ]
190   Y[I%] = 0        'Zero the sample in the output array
200   FOR J% = 0 TO 30 'Loop for each point in H[ ]
210     IF (I%-J% < 0) THEN GOTO 240
220     IF (I%-J% > 80) THEN GOTO 240
230     Y[I%] = Y[I%] + H[J%] ∗ X[I%-J%]
240   NEXT J%
250 NEXT I%
260 '
270 GOSUB XXXX  'Mythical subroutine to store Y[ ]
280 '
290 END

TABLE 6-2
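For comparison, here is a sketch of the same output side algorithm in C. This version takes the third alternative described above, computing only the output samples where the impulse response is fully immersed in the input signal; the function name and argument layout are illustrative, not from the book:

/* Output side convolution (Eq. 6-1), fully immersed samples only.
   x[] is an N point input signal, h[] an M point impulse response;
   y[i] is calculated for i = M-1 ... N-1, so x[i-j] is always in
   bounds and no zero padding or index tests are needed. */
void convolve_immersed(const float x[], int N, const float h[], int M, float y[])
{
    for (int i = M - 1; i < N; i++) {   /* each fully immersed output sample  */
        y[i] = 0;                       /* zero the accumulator               */
        for (int j = 0; j < M; j++)     /* each point in the impulse response */
            y[i] += h[j] * x[i - j];    /* multiply and accumulate            */
    }
}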
The Sum of Weighted Inputs

The characteristics of a linear system are completely described by its impulse response. This is the basis of the input side algorithm: each point in the input signal contributes a scaled and shifted version of the impulse response to the output signal. The mathematical consequences of this lead to the output side algorithm: each point in the output signal receives a contribution from many points in the input signal, multiplied by a flipped impulse response. While this is all true, it doesn't provide the full story on why convolution is important in signal processing.

Look back at the convolution machine in Fig. 6-8, and ignore that the signal inside the dotted box is an impulse response. Think of it as a set of weighing coefficients that happen to be embedded in the flow diagram. In this view, each sample in the output signal is equal to a sum of weighted inputs. Each sample in the output is influenced by a region of samples in the input signal, as determined by what the weighing coefficients are chosen to be. For example, imagine there are ten weighing coefficients, each with a value of one-tenth. This makes each sample in the output signal the average of ten samples from the input.

Taking this further, the weighing coefficients do not need to be restricted to the left side of the output sample being calculated. For instance, Fig. 6-8 shows y[6] being calculated from: x[3], x[4], x[5], and x[6]. Viewing the convolution machine as a sum of weighted inputs, the weighing coefficients could be chosen symmetrically around the output sample. For example, y[6] might receive contributions from: x[4], x[5], x[6], x[7], and x[8]. Using the same indexing notation as in Fig. 6-8, the weighing coefficients for these five inputs would be held in: h[2], h[1], h[0], h[-1], and h[-2]. In other words, the impulse response that corresponds to our selection of symmetrical weighing coefficients requires the use of negative indexes. We will return to this in the next chapter.

Mathematically, there is only one concept here: convolution as defined by Eq. 6-1. However, science and engineering problems approach this single concept from two distinct directions. Sometimes you will want to think of a system in terms of what its impulse response looks like. Other times you will understand the system as a set of weighing coefficients. You need to become familiar with both views, and how to toggle between them.
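As a concrete sketch of the symmetrical weighting just described, here is a hypothetical five point centered average in C, with equal weights of one-fifth. The function name and edge handling are illustrative, not from the book:

/* Centered (symmetrical) moving average. Each output sample is the
   average of the input sample and its two neighbors on each side;
   the weights act as h[2], h[1], h[0], h[-1], h[-2], all equal to 1/5. */
void centered_average5(const float x[], int n, float y[])
{
    /* The first and last two outputs are skipped; they would need
       samples beyond the ends of the input signal. */
    for (int i = 2; i < n - 2; i++)
        y[i] = (x[i-2] + x[i-1] + x[i] + x[i+1] + x[i+2]) / 5.0f;
}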
Digital Signal Processors

Digital Signal Processing is carried out by mathematical operations. In comparison, word processing and similar programs merely rearrange stored data. This means that computers designed for business and other general applications are not optimized for algorithms such as digital filtering and Fourier analysis. Digital Signal Processors are microprocessors specifically designed to handle Digital Signal Processing tasks. These devices have seen tremendous growth in the last decade, finding use in everything from cellular telephones to advanced scientific instruments. In fact, hardware engineers use "DSP" to mean Digital Signal Processor, just as algorithm developers use "DSP" to mean Digital Signal Processing. This chapter looks at how DSPs are different from other types of microprocessors, how to decide if a DSP is right for your application, and how to get started in this exciting new field. In the next chapter we will take a more detailed look at one of these sophisticated products: the Analog Devices SHARC® family.

How DSPs are Different from Other Microprocessors

In the 1960s it was predicted that artificial intelligence would revolutionize the way humans interact with computers and other machines. It was believed that by the end of the century we would have robots cleaning our houses, computers driving our cars, and voice interfaces controlling the storage and retrieval of information. This hasn't happened; these abstract tasks are far more complicated than expected, and very difficult to carry out with the step-by-step logic provided by digital computers. However, the last forty years have shown that computers are extremely capable in two broad areas, (1) data manipulation, such as word processing and database management, and (2) mathematical calculation, used in science, engineering, and Digital Signal Processing. All microprocessors can perform both tasks; however, it is difficult (expensive) to make a device that is optimized for both. There are technical tradeoffs in the hardware design, such as the size of the instruction set and how interrupts are handled. Even more important, there are marketing issues involved: development and manufacturing cost, competitive position, product lifetime, and so on. As a broad generalization, these factors have made traditional microprocessors, such as the Pentium®, primarily directed at data manipulation. Similarly, DSPs are designed to perform the mathematical calculations needed in Digital Signal Processing.

FIGURE 28-1 Data manipulation versus mathematical calculation. Digital computers are useful for two general tasks: data manipulation and mathematical calculation. Data manipulation is based on moving data and testing inequalities, while mathematical calculation uses multiplication and addition.

                        Data Manipulation                 Math Calculation
  Typical Applications  word processing, database         Digital Signal Processing,
                        management, spreadsheets,         motion control, scientific and
                        operating systems, etc.           engineering simulations, etc.
  Main Operations       data movement (A → B)             addition (A + B = C)
                        value testing (IF A = B THEN ...) multiplication (A × B = C)

Figure 28-1 lists the most important differences between these two categories. Data manipulation involves storing and sorting information. For instance, consider a word processing program. The basic task is to store the information (typed in by the operator), organize the information (cut and paste, spell checking, page layout, etc.), and then retrieve the information (such as saving the document on a floppy disk or printing it with a laser printer). These tasks are accomplished by moving data from one location to another, and testing for inequalities (A=B, A<B, etc.). As an example, consider alphabetizing a list of entries by repeatedly operating on adjacent pairs. First, test the two entries to determine which comes first in the alphabet (IF A>B THEN ...). Second, if the two entries are not in alphabetical order, switch them so that they are (A↔B). When this two step process is repeated many times on all adjacent pairs, the list will eventually become alphabetized.

As another example, consider how a document is printed from a word processor. The computer continually tests the input device (mouse or keyboard) for the binary code that indicates "print the document." When this code is detected, the program moves the data from the computer's memory to the printer. Here we have the same two basic operations: moving data and inequality testing.
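In C, the alphabetizing example reduces to exactly these two operations. A minimal sketch; the fixed-length string array and single-pass structure are assumptions made for illustration:

#include <string.h>

/* One pass of adjacent-pair alphabetizing: test each pair (IF A>B),
   and swap entries that are out of order (A <-> B). Repeating the pass
   until no swaps occur leaves the list alphabetized. Note that only
   inequality testing and data movement are involved -- no math. */
void alphabetize_pass(char names[][16], int count)
{
    char temp[16];
    for (int i = 0; i < count - 1; i++) {
        if (strcmp(names[i], names[i+1]) > 0) {   /* inequality test */
            strcpy(temp, names[i]);               /* data movement   */
            strcpy(names[i], names[i+1]);
            strcpy(names[i+1], temp);
        }
    }
}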
While mathematics is occasionally used in this type of application, it is infrequent and does not significantly affect the overall execution speed. In comparison, the execution speed of most DSP algorithms is limited almost completely by the number of multiplications and additions required. For example, Fig. 28-2 shows the implementation of an FIR digital filter, the most common DSP technique. Using the standard notation, the input signal is referred to by x[ ], while the output signal is denoted by y[ ]. Our task is to calculate the sample at location n in the output signal, i.e., y[n]. An FIR filter performs this calculation by multiplying appropriate samples from the input signal by a group of coefficients, denoted by a0, a1, a2, a3, ..., and then adding the products. In equation form, y[n] is found by:

$y[n] = a_0 x[n] + a_1 x[n-1] + a_2 x[n-2] + a_3 x[n-3] + a_4 x[n-4] + \cdots$

FIGURE 28-2 FIR digital filter. In FIR filtering, each sample in the output signal, y[n], is found by multiplying samples from the input signal, x[n], x[n-1], x[n-2], ..., by the filter kernel coefficients, a0, a1, a2, a3, ..., and summing the products.

This is simply saying that the input signal has been convolved with a filter kernel (i.e., an impulse response) consisting of: a0, a1, a2, a3, .... Depending on the application, there may only be a few coefficients in the filter kernel, or many thousands. While there is some data transfer and inequality evaluation in this algorithm, such as to keep track of the intermediate results and control the loops, the math operations dominate the execution time.

In addition to performing mathematical calculations very rapidly, DSPs must also have a predictable execution time. Suppose you launch your desktop computer on some task, say, converting a word-processing document from one form to another. It doesn't matter if the processing takes ten milliseconds or ten seconds; you simply wait for the action to be completed before you give the computer its next assignment. In comparison, most DSPs are used in applications where the processing is continuous, not having a defined start or end. For instance, consider an engineer designing a DSP system for an audio signal, such as a hearing aid. If the digital signal is being received at 20,000 samples per second, the DSP must be able to maintain a sustained throughput of 20,000 samples per second. However, there are important reasons not to make it any faster than necessary. As the speed increases, so does the cost, the power consumption, the design difficulty, and so on. This makes an accurate knowledge of the execution time critical for selecting the proper device, as well as the algorithms that can be applied.
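The arithmetic behind this throughput requirement is easy to make concrete. A small C sketch, using the hearing aid sample rate above and a hypothetical 40 MHz device clock (the clock figure is an assumption for illustration):

#include <stdio.h>

int main(void)
{
    double sample_rate = 20000.0;   /* samples per second (hearing aid example) */
    double clock_rate  = 40.0e6;    /* hypothetical 40 MHz DSP clock            */

    /* Each sample must be fully processed before the next one arrives. */
    double budget_us     = 1.0e6 / sample_rate;        /* time budget per sample  */
    double budget_cycles = clock_rate / sample_rate;   /* cycle budget per sample */

    printf("Per-sample budget: %.0f microseconds, or %.0f clock cycles\n",
           budget_us, budget_cycles);   /* prints 50 and 2000 */
    return 0;
}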
Circular Buffering

Digital Signal Processors are designed to quickly carry out FIR filters and similar techniques. To understand the hardware, we must first understand the algorithms. In this section we will make a detailed list of the steps needed to implement an FIR filter. In the next section we will see how DSPs are designed to perform these steps as efficiently as possible. To start, we need to distinguish between off-line processing and real-time processing. In off-line processing, the entire input signal resides in the computer at the same time. For example, a geophysicist might use a seismometer to record the ground movement during an earthquake. After the shaking is over, the information may be read into a computer and analyzed in some way. Another example of off-line processing is medical imaging, such as computed tomography and MRI. The data set is acquired while the patient is inside the machine, but the image reconstruction may be delayed until a later time. The key point is that all of the information is simultaneously available to the processing program. This is common in scientific research and engineering, but not in consumer products. Off-line processing is the realm of personal computers and mainframes.

In real-time processing, the output signal is produced at the same time that the input signal is being acquired. For example, this is needed in telephone communication, hearing aids, and radar. These applications must have the information immediately available, although it can be delayed by a short amount. For instance, a 10 millisecond delay in a telephone call cannot be detected by the speaker or listener. Likewise, it makes no difference if a radar signal is delayed by a few seconds before being displayed to the operator. Real-time applications input a sample, perform the algorithm, and output a sample, over-and-over. Alternatively, they may input a group of samples, perform the algorithm, and output a group of samples. This is the world of Digital Signal Processors.

FIGURE 28-3 Circular buffer operation. Circular buffers are used to store the most recent values of a continually updated signal. This illustration shows how an eight sample circular buffer might appear at some instant in time (a), and how it would appear one sample later (b).

Now look back at Fig. 28-2 and imagine that this is an FIR filter being implemented in real-time. To calculate the output sample, we must have access to a certain number of the most recent samples from the input. For example, suppose we use eight coefficients in this filter, a0, a1, ..., a7. This means we must know the value of the eight most recent samples from the input signal, x[n], x[n-1], ..., x[n-7]. These eight samples must be stored in memory and continually updated as new samples are acquired. What is the best way to manage these stored samples? The answer is circular buffering.

Figure 28-3 illustrates an eight sample circular buffer. We have placed this circular buffer in eight consecutive memory locations, 20041 to 20048. Figure (a) shows how the eight samples from the input might be stored at one particular instant in time, while (b) shows the changes after the next sample is acquired. The idea of circular buffering is that the end of this linear array is connected to its beginning; memory location 20041 is viewed as being next to 20048, just as 20044 is next to 20045. You keep track of the array by a pointer (a variable whose value is an address) that indicates where the most recent sample resides. For instance, in (a) the pointer contains the address 20044, while in (b) it contains 20045. When a new sample is acquired, it replaces the oldest sample in the array, and the pointer is moved one address ahead. Circular buffers are efficient because only one value needs to be changed when a new sample is acquired.
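A minimal C sketch of this updating scheme, using an array index as the pointer rather than a hardware memory address; the helper names and the eight sample length follow the figure, but are otherwise illustrative:

#define BUF_LEN 8

static float buffer[BUF_LEN];   /* the eight most recent input samples */
static int   newest = 0;        /* index of the most recent sample     */

/* Acquire a new sample: overwrite the oldest value in the array and
   move the pointer one place ahead, wrapping from the end of the
   array back to its beginning. Only one value changes per sample. */
void insert_sample(float sample)
{
    newest = (newest + 1) % BUF_LEN;
    buffer[newest] = sample;
}

/* Retrieve x[n-k], the sample acquired k samples ago (k = 0 ... 7). */
float get_sample(int k)
{
    return buffer[(newest - k + BUF_LEN) % BUF_LEN];
}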
Four parameters are needed to manage a circular buffer. First, there must be a pointer that indicates the start of the circular buffer in memory (in this example, 20041). Second, there must be a pointer indicating the end of the array (e.g., 20048), or a variable that holds its length (e.g., 8). Third, the step size of the memory addressing must be specified. In Fig. 28-3 the step size is one, for example: address 20043 contains one sample, address 20044 contains the next sample, and so on. This is frequently not the case. For instance, the addressing may refer to bytes, and each sample may require two or four bytes to hold its value. In these cases, the step size would need to be two or four, respectively. These three values define the size and configuration of the circular buffer, and will not change during the program operation. The fourth value, the pointer to the most recent sample, must be modified as each new sample is acquired. In other words, there must be program logic that controls how this fourth value is updated based on the value of the first three values. While this logic is quite simple, it must be very fast. This is the whole point of this discussion; DSPs should be optimized at managing circular buffers to achieve the highest possible execution speed.

As an aside, circular buffering is also useful in off-line processing. Consider a program where both the input and the output signals are completely contained in memory. Circular buffering isn't needed for a convolution calculation, because every sample can be immediately accessed. However, many algorithms are implemented in stages, with an intermediate signal being created between each stage. For instance, a recursive filter carried out as a series of biquads operates in this way. The brute force method is to store the entire length of each intermediate signal in memory. Circular buffering provides another option: store only those intermediate samples needed for the calculation at hand. This reduces the required amount of memory, at the expense of a more complicated algorithm. The important idea is that circular buffers are useful for off-line processing, but critical for real-time applications.
Now we can look at the steps needed to implement an FIR filter using circular buffers for both the input signal and the coefficients. This list may seem trivial and overexamined; it's not! The efficient handling of these individual tasks is what separates a DSP from a traditional microprocessor. For each new sample, all the following steps need to be taken:

TABLE 28-1 FIR filter steps.
1. Obtain a sample with the ADC; generate an interrupt
2. Detect and manage the interrupt
3. Move the sample into the input signal's circular buffer
4. Update the pointer for the input signal's circular buffer
5. Zero the accumulator
6. Control the loop through each of the coefficients
7. Fetch the coefficient from the coefficient's circular buffer
8. Update the pointer for the coefficient's circular buffer
9. Fetch the sample from the input signal's circular buffer
10. Update the pointer for the input signal's circular buffer
11. Multiply the coefficient by the sample
12. Add the product to the accumulator
13. Move the output sample (accumulator) to a holding buffer
14. Move the output sample from the holding buffer to the DAC

The goal is to make these steps execute quickly. Since steps 6-12 will be repeated many times (once for each coefficient in the filter), special attention must be given to these operations. Traditional microprocessors must generally carry out these 14 steps in serial (one after another), while DSPs are designed to perform them in parallel. In some cases, all of the operations within the loop (steps 6-12) can be completed in a single clock cycle. Let's look at the internal architecture that allows this magnificent performance.
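Before turning to the hardware, here is how steps 5 through 12 might look in plain C for a filter with its input samples in a circular buffer. This is a sketch with illustrative names; on a traditional processor each statement executes serially, which is exactly the cost the architecture below is built to remove:

/* Steps 5-12 of Table 28-1 in C: compute one output sample.
   coeff[] holds the taps a0 ... a(len-1); buf[] is the input
   signal's circular buffer; newest is the index of x[n]. */
float fir_output(const float coeff[], const float buf[], int newest, int len)
{
    float acc = 0;                                    /* step 5: zero the accumulator */
    for (int j = 0; j < len; j++)                     /* step 6: control the loop     */
    {
        float sample = buf[(newest - j + len) % len]; /* steps 9-10: fetch x[n-j] and
                                                         update the buffer index      */
        acc += coeff[j] * sample;                     /* steps 7, 11, 12: fetch the
                                                         coefficient, multiply, add   */
    }
    return acc;                                       /* step 13 moves acc onward     */
}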
Architecture of the Digital Signal Processor

One of the biggest bottlenecks in executing DSP algorithms is transferring information to and from memory. This includes data, such as samples from the input signal and the filter coefficients, as well as program instructions, the binary codes that go into the program sequencer. For example, suppose we need to multiply two numbers that reside somewhere in memory. To do this, we must fetch three binary values from memory, the numbers to be multiplied, plus the program instruction describing what to do. Figure 28-4a shows how this seemingly simple task is done in a traditional microprocessor. This is often called a Von Neumann architecture, after the brilliant American mathematician John Von Neumann (1903-1957). Von Neumann guided the mathematics of many important discoveries of the early twentieth century. His many achievements include: developing the concept of a stored program computer, formalizing the mathematics of quantum mechanics, and work on the atomic bomb. If it was new and exciting, Von Neumann was there!

As shown in (a), a Von Neumann architecture contains a single memory and a single bus for transferring data into and out of the central processing unit (CPU). Multiplying two numbers requires at least three clock cycles, one to transfer each of the three numbers over the bus from the memory to the CPU. We don't count the time to transfer the result back to memory, because we assume that it remains in the CPU for additional manipulation (such as the sum of products in an FIR filter). The Von Neumann design is quite satisfactory when you are content to execute all of the required tasks in serial. In fact, most computers today are of the Von Neumann design. We only need other architectures when very fast processing is required, and we are willing to pay the price of increased complexity.

This leads us to the Harvard architecture, shown in (b). This is named for the work done at Harvard University in the 1940s under the leadership of Howard Aiken (1900-1973). As shown in this illustration, Aiken insisted on separate memories for data and program instructions, with separate buses for each. Since the buses operate independently, program instructions and data can be fetched at the same time, improving the speed over the single bus design. Most present day DSPs use this dual bus architecture.

FIGURE 28-4 Microprocessor architecture. The Von Neumann architecture uses a single memory to hold both data and instructions. In comparison, the Harvard architecture uses separate memories for data and instructions, providing higher speed. The Super Harvard Architecture improves upon the Harvard design by adding an instruction cache and a dedicated I/O controller.

Figure (c) illustrates the next level of sophistication, the Super Harvard Architecture. This term was coined by Analog Devices to describe the internal operation of their ADSP-2106x and new ADSP-211xx families of Digital Signal Processors. These are called SHARC® DSPs, a contraction of the longer term, Super Harvard ARChitecture. The idea is to build upon the Harvard architecture by adding features to improve the throughput. While the SHARC DSPs are optimized in dozens of ways, two areas are important enough to be included in Fig. 28-4c: an instruction cache, and an I/O controller.

First, let's look at how the instruction cache improves the performance of the Harvard architecture. A handicap of the basic Harvard design is that the data memory bus is busier than the program memory bus. When two numbers are multiplied, two binary values (the numbers) must be passed over the data memory bus, while only one binary value (the program instruction) is passed over the program memory bus. To improve upon this situation, we start by relocating part of the "data" to program memory. For instance, we might place the filter coefficients in program memory, while keeping the input signal in data memory. (This relocated data is called "secondary data" in the illustration). At first glance, this doesn't seem to help the situation; now we must transfer one value over the data memory bus (the input signal sample), but two values over the program memory bus (the program instruction and the coefficient). In fact, if we were executing random instructions, this situation would be no better at all. However, DSP algorithms generally spend most of their execution time in loops, such as instructions 6-12 of Table 28-1. This means that the same set of program instructions will continually pass from program memory to the CPU. The Super Harvard architecture takes advantage of this situation by including an instruction cache in the CPU. This is a small memory that contains about 32 of the most recent program instructions. The first time through a loop, the program instructions must be passed over the program memory bus. This results in slower operation because of the conflict with the coefficients that must also be fetched along this path. However, on additional executions of the loop, the program instructions can be pulled from the instruction cache. This means that all of the memory to CPU information transfers can be accomplished in a single cycle: the sample from the input signal comes over the data memory bus, the coefficient comes over the program memory bus, and the program instruction comes from the instruction cache. In the jargon of the field, this efficient transfer of data is called a high memory-access bandwidth.
Figure 28-5 presents a more detailed view of the SHARC architecture, showing the I/O controller connected to data memory. This is how the signals enter and exit the system. For instance, the SHARC DSPs provide both serial and parallel communications ports. These are extremely high speed connections. For example, at a 40 MHz clock speed, there are two serial ports that operate at 40 Mbits/second each, while six parallel ports each provide a 40 Mbytes/second data transfer. When all six parallel ports are used together, the data transfer rate is an incredible 240 Mbytes/second. This is fast enough to transfer the entire text of this book in only 2 milliseconds! Just as important, dedicated hardware allows these data streams to be transferred directly into memory (Direct Memory Access, or DMA), without having to pass through the CPU's registers. In other words, tasks 1 & 14 on our list happen independently and simultaneously with the other tasks; no cycles are stolen from the CPU. The main buses (program memory bus and data memory bus) are also accessible from outside the chip, providing an additional interface to off-chip memory and peripherals. This allows the SHARC DSPs to use a four Gigaword (16 Gbyte) memory, accessible at 40 Mwords/second (160 Mbytes/second), for 32 bit data. Wow!

FIGURE 28-5 Typical DSP architecture. Digital Signal Processors are designed to implement tasks in parallel. This simplified diagram is of the Analog Devices SHARC DSP. Compare this architecture with the tasks needed to implement an FIR filter, as listed in Table 28-1. All of the steps within the loop can be executed in a single clock cycle.

This type of high speed I/O is a key characteristic of DSPs. The overriding goal is to move the data in, perform the math, and move the data out before the next sample is available. Everything else is secondary. Some DSPs have onboard analog-to-digital and digital-to-analog converters, a feature called mixed signal. However, all DSPs can interface with external converters through serial or parallel ports.

Now let's look inside the CPU. At the top of the diagram are two blocks labeled Data Address Generator (DAG), one for each of the two memories. These control the addresses sent to the program and data memories, specifying where the information is to be read from or written to. In simpler microprocessors this task is handled as an inherent part of the program sequencer, and is quite transparent to the programmer. However, DSPs are designed to operate with circular buffers, and benefit from the extra hardware to manage them efficiently. This avoids needing to use precious CPU clock cycles to keep track of how the data are stored. For instance, in the SHARC DSPs, each of the two DAGs can control eight circular buffers. This means that each DAG holds 32 variables (4 per buffer), plus the required logic.

Why so many circular buffers? Some DSP algorithms are best carried out in stages. For instance, IIR filters are more stable if implemented as a cascade of biquads (a stage containing two poles and up to two zeros). Multiple stages require multiple circular buffers for the fastest operation. The DAGs in the SHARC DSPs are also designed to efficiently carry out the Fast Fourier transform. In this mode, the DAGs are configured to generate bit-reversed addresses into the circular buffers, a necessary part of the FFT algorithm. In addition, an abundance of circular buffers greatly simplifies DSP code generation, both for the human programmer and for high-level language compilers, such as C.
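Bit-reversed addressing is easy to illustrate in software. A C sketch for a hypothetical 8 point FFT, where addresses are 3 bits wide; the DAG hardware produces this sequence with no per-access cost:

/* Reverse the low 'bits' bits of index n. For an 8 point FFT
   (bits = 3), index 1 (binary 001) maps to 4 (100), 3 (011) maps
   to 6 (110), and so on -- the sample ordering the FFT requires. */
unsigned bit_reverse(unsigned n, int bits)
{
    unsigned r = 0;
    for (int i = 0; i < bits; i++) {
        r = (r << 1) | (n & 1);   /* shift the lowest bit of n into r */
        n >>= 1;
    }
    return r;
}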
The data register section of the CPU is used in the same way as in traditional microprocessors. In the ADSP-2106x SHARC DSPs, there are 16 general purpose registers of 40 bits each. These can hold intermediate calculations, prepare data for the math processor, serve as a buffer for data transfer, hold flags for program control, and so on. If needed, these registers can also be used to control loops and counters; however, the SHARC DSPs have extra hardware registers to carry out many of these functions.

The math processing is broken into three sections, a multiplier, an arithmetic logic unit (ALU), and a barrel shifter. The multiplier takes the values from two registers, multiplies them, and places the result into another register. The ALU performs addition, subtraction, absolute value, logical operations (AND, OR, XOR, NOT), conversion between fixed and floating point formats, and similar functions. Elementary binary operations are carried out by the barrel shifter, such as shifting, rotating, extracting and depositing segments, and so on. A powerful feature of the SHARC family is that the multiplier and the ALU can be accessed in parallel. In a single clock cycle, data from registers 0-7 can be passed to the multiplier, data from registers 8-15 can be passed to the ALU, and the two results returned to any of the 16 registers.

There are also many important features of the SHARC family architecture that aren't shown in this simplified illustration. For instance, an 80 bit accumulator is built into the multiplier to reduce the round-off error associated with multiple fixed-point math operations. Another interesting feature is the use of shadow registers for all the CPU's key registers. These are duplicate registers that can be switched with their counterparts in a single clock cycle. They are used for fast context switching, the ability to handle interrupts quickly. When an interrupt occurs in traditional microprocessors, all the internal data must be saved before the interrupt can be handled. This usually involves pushing all of the occupied registers onto the stack, one at a time. In comparison, an interrupt in the SHARC family is handled by moving the internal data into the shadow registers in a single clock cycle. When the interrupt routine is completed, the registers are just as quickly restored. This feature allows step 4 on our list (managing the sample-ready interrupt) to be handled very quickly and efficiently.

Now we come to the critical performance of the architecture, how many of the operations within the loop (steps 6-12 of Table 28-1) can be carried out at the same time. Because of its highly parallel nature, the SHARC DSP can simultaneously carry out all of these tasks.
Specifically, within a single clock cycle, it can perform a multiply (step 11), an addition (step 12), two data moves (steps 7 and 9), update two circular buffer pointers (steps 8 and 10), and control the loop (step 6). There will be extra clock cycles associated with beginning and ending the loop (steps 3, 4, 5 and 13, plus moving initial values into place); however, these tasks are also handled very efficiently. If the loop is executed more than a few times, this overhead will be negligible. As an example, suppose you write an efficient FIR filter program using 100 coefficients. You can expect it to require about 105 to 110 clock cycles per sample to execute (i.e., 100 coefficient loops plus overhead). This is very impressive; a traditional microprocessor requires many thousands of clock cycles for this algorithm.

Fixed versus Floating Point

Digital Signal Processing can be divided into two categories, fixed point and floating point. These refer to the format used to store and manipulate numbers within the devices. Fixed point DSPs usually represent each number with a minimum of 16 bits, although a different length can be used. For instance, Motorola manufactures a family of fixed point DSPs that use 24 bits. There are four common ways that these 2^16 = 65,536 possible bit patterns can represent a number. In unsigned integer, the stored number can take on any integer value from 0 to 65,535. Similarly, signed integer uses two's complement to make the range include negative numbers, from -32,768 to 32,767. With unsigned fraction notation, the 65,536 levels are spread uniformly between 0 and 1. Lastly, the signed fraction format allows negative numbers, equally spaced between -1 and 1.

In comparison, floating point DSPs typically use a minimum of 32 bits to store each value. This results in many more bit patterns than for fixed point, 2^32 = 4,294,967,296 to be exact. A key feature of floating point notation is that the represented numbers are not uniformly spaced. In the most common format (ANSI/IEEE Std. 754-1985), the largest and smallest numbers are ±3.4×10^38 and ±1.2×10^-38, respectively. The represented values are unequally spaced between these two extremes, such that the gap between any two numbers is about ten-million times smaller than the value of the numbers. This is important because it places large gaps between large numbers, but small gaps between small numbers. Floating point notation is discussed in more detail in Chapter 4.

All floating point DSPs can also handle fixed point numbers, a necessity to implement counters, loops, and signals coming from the ADC and going to the DAC. However, this doesn't mean that fixed point math will be carried out as quickly as the floating point operations; it depends on the internal architecture. For instance, the SHARC DSPs are optimized for both floating point and fixed point operations, and execute them with equal efficiency. For this reason, the SHARC devices are often referred to as "32-bit DSPs," rather than just "Floating Point."
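The four fixed point formats described above are easiest to see as different interpretations of the same 16 bit pattern. A short C illustration (a sketch, not SHARC code; the scale factors follow directly from the ranges given above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t bits = 0xC000;   /* one of the 65,536 possible bit patterns */

    printf("unsigned integer : %u\n", (unsigned)bits);           /* 0 to 65,535        */
    printf("signed integer   : %d\n", (int)(int16_t)bits);       /* -32,768 to 32,767  */
    printf("unsigned fraction: %f\n", bits / 65536.0);           /* 0 to just under 1  */
    printf("signed fraction  : %f\n", (int16_t)bits / 32768.0);  /* -1 to just under 1 */
    return 0;
}

For the pattern 0xC000 this prints 49152, -16384, 0.75, and -0.5, one number for each interpretation of the same stored bits.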
Figure 28-6 illustrates the primary trade-offs between fixed and floating point DSPs.

FIGURE 28-6 Fixed versus floating point. Fixed point DSPs are generally cheaper, while floating point devices have better precision, higher dynamic range, and a shorter development cycle.

In Chapter 3 we stressed that fixed point arithmetic is much faster than floating point in general purpose computers. However, with DSPs the speed is about the same, a result of the hardware being highly optimized for math operations. The internal architecture of a floating point DSP is more complicated than for a fixed point device. All the registers and data buses must be 32 bits wide instead of only 16; the multiplier and ALU must be able to quickly perform floating point arithmetic, the instruction set must be larger (so that they can handle both floating and fixed point numbers), and so on. Floating point (32 bit) has better precision and a higher dynamic range than fixed point (16 bit). In addition, floating point programs often have a shorter development cycle, since the programmer doesn't generally need to worry about issues such as overflow, underflow, and round-off error.

On the other hand, fixed point DSPs have traditionally been cheaper than floating point devices. Nothing changes more rapidly than the price of electronics; anything you find in a book will be out-of-date before it is printed. Nevertheless, cost is a key factor in understanding how DSPs are evolving, and we need to give you a general idea. When this book was completed in 1999, fixed point DSPs sold for between $5 and $100, while floating point devices were in the range of $10 to $300. This difference in cost can be viewed as a measure of the relative complexity between the devices. If you want to find out what the prices are today, you need to look today.

Now let's turn our attention to performance; what can a 32-bit floating point system do that a 16-bit fixed point can't? The answer to this question is signal-to-noise ratio. Suppose we store a number in a 32 bit floating point format. As previously mentioned, the gap between this number and its adjacent neighbor is about one ten-millionth of the value of the number. To store the number, it must be rounded up or down by a maximum of one-half the gap size. In other words, each time we store a number in floating point notation, we add noise to the signal.

The same thing happens when a number is stored as a 16-bit fixed point value, except that the added noise is much worse. This is because the gaps between adjacent numbers are much larger. For instance, suppose we store the number 10,000 as a signed integer (running from -32,768 to 32,767). The gap between numbers is one ten-thousandth of the value of the number we are storing. If we want to store the number 1,000, the gap between numbers is one one-thousandth of the value.

Noise in signals is usually represented by its standard deviation. This was discussed in detail in Chapter 2. For here, the important fact is that the standard deviation of this quantization noise is about one-third of the gap size. This means that the signal-to-noise ratio for storing a floating point number is about 30 million to one, while for a fixed point number it is only about ten-thousand to one. In other words, floating point has roughly 30,000 times less quantization noise than fixed point.

This brings up an important way that DSPs are different from traditional microprocessors. Suppose we implement an FIR filter in fixed point. To do this, we loop through each coefficient, multiply it by the appropriate sample from the input signal, and add the product to an accumulator.
Here's the problem. In traditional microprocessors, this accumulator is just another 16 bit fixed point variable. To avoid overflow, we need to scale the values being added, and will correspondingly add quantization noise on each step. In the worst case, this quantization noise will simply add, greatly lowering the signal-to-noise ratio of the system. For instance, in a 500 coefficient FIR filter, the noise on each output sample may be 500 times the noise on each input sample. The signal-to-noise ratio of ten-thousand to one has dropped to a ghastly twenty to one. Although this is an extreme case, it illustrates the main point: when many operations are carried out on each sample, it's bad, really bad. See Chapter 3 for more details.

DSPs handle this problem by using an extended precision accumulator. This is a special register that has 2-3 times as many bits as the other memory locations. For example, in a 16 bit DSP it may have 32 to 40 bits, while in the SHARC DSPs it contains 80 bits for fixed point use. This extended range virtually eliminates round-off noise while the accumulation is in progress. The only round-off error suffered is when the accumulator is scaled and stored in the 16 bit memory. This strategy works very well, although it does limit how some algorithms must be carried out. In comparison, floating point has such low quantization noise that these techniques are usually not necessary.
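In C, the effect of an extended precision accumulator can be imitated with a wider integer type. A simplified sketch for 16 bit signed fraction samples, with an illustrative function name; real devices also provide the scaling and saturation logic in hardware:

#include <stdint.h>

/* Fixed point FIR accumulation. Each 16x16 bit product needs up to
   32 bits; summing hundreds of them needs several bits more. By
   accumulating in 64 bits, no round-off or overflow occurs until
   the final result is scaled back down to 16 bits. */
int16_t fir_fixed(const int16_t coeff[], const int16_t x[], int taps)
{
    int64_t acc = 0;                        /* the extended accumulator      */
    for (int j = 0; j < taps; j++)
        acc += (int32_t)coeff[j] * x[j];    /* exact product, no noise added */

    /* The only round-off: rescaling to 16 bits. In signed fraction
       format each product carries an extra factor of 2^15 that is
       removed here (saturation checking omitted for brevity).      */
    return (int16_t)(acc >> 15);
}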
In addition to having lower quantization noise, floating point systems are also easier to develop algorithms for. Most DSP techniques are based on repeated multiplications and additions. In fixed point, the possibility of an overflow or underflow needs to be considered after each operation. The programmer needs to continually understand the amplitude of the numbers, how the quantization errors are accumulating, and what scaling needs to take place. In comparison, these issues do not arise in floating point; the numbers take care of themselves (except in rare cases).

To give you a better understanding of this issue, Fig. 28-7 shows a table from the SHARC user manual. This describes the ways that multiplication can be carried out for both fixed and floating point formats. First, look at how floating point numbers can be multiplied; there is only one way! That is, Fn = Fx * Fy, where Fn, Fx, and Fy are any of the 16 data registers. It could not be any simpler. In comparison, look at all the possible commands for fixed point multiplication. These are the many options needed to efficiently handle the problems of round-off, scaling, and format.

FIGURE 28-7 Fixed versus floating point instructions. These are the multiplication instructions used in the SHARC DSPs. While only a single command is needed for floating point, many options are needed for fixed point. See the text for an explanation of these options.

In Fig. 28-7, Rn, Rx, and Ry refer to any of the 16 data registers, and MRF and MRB are 80 bit accumulators. The vertical lines indicate options. For instance, the top-left entry in this table means that all the following are valid commands: Rn = Rx * Ry, MRF = Rx * Ry, and MRB = Rx * Ry. In other words, the value of any two registers can be multiplied and placed into another register, or into one of the extended precision accumulators. This table also shows that the numbers may be either signed or unsigned (S or U), and may be fractional or integer (F or I). The RND and SAT options are ways of controlling rounding and register overflow. There are other details and options in the table, but they are not important for our present discussion. The important idea is that the fixed point programmer must understand dozens of ways to carry out the very basic task of multiplication. In contrast, the floating point programmer can spend his time concentrating on the algorithm.

Given these tradeoffs between fixed and floating point, how do you choose which to use? Here are some things to consider. First, look at how many bits are used in the ADC and DAC. In many applications, 12-14 bits per sample is the crossover for using fixed versus floating point. For instance, television and other video signals typically use 8 bit ADC and DAC, and the precision of fixed point is acceptable. In comparison, professional audio applications can sample with as high as 20 or 24 bits, and almost certainly need floating point to capture the large dynamic range.

The next thing to look at is the complexity of the algorithm that will be run. If it is relatively simple, think fixed point; if it is more complicated, think floating point. For example, FIR filtering and other operations in the time domain only require a few dozen lines of code, making them suitable for fixed point. In contrast, frequency domain algorithms, such as spectral analysis and FFT convolution, are very detailed and can be much more difficult to program. While they can be written in fixed point, the development time will be greatly reduced if floating point is used.

Lastly, think about the money: how important is the cost of the product, and how important is the cost of the development? When fixed point is chosen, the cost of the product will be reduced, but the development cost will probably be higher due to the more difficult algorithms. In the reverse manner, floating point will generally result in a quicker and cheaper development cycle, but a more expensive final product.

Figure 28-8 shows some of the major trends in DSPs. Figure (a) illustrates the impact that Digital Signal Processors have had on the embedded market. These are applications that use a microprocessor to directly operate and control some larger system, such as a cellular telephone, microwave oven, or automotive instrument display panel. The name "microcontroller" is often used in referring to these devices, to distinguish them from the microprocessors used in personal computers. As shown in (a), about 38% of embedded designers have already started using DSPs, and another 49% are considering the switch. The high throughput and computational power of DSPs often makes them an ideal choice for embedded designs.

As illustrated in (b), about twice as many engineers currently use fixed point as use floating point DSPs. However, this depends greatly on the application. Fixed point is more popular in competitive consumer products where the cost of the electronics must be kept very low. A good example of this is cellular telephones.
When you are in competition to sell millions of your product, a cost difference of only a few dollars can be the difference between success and failure. In comparison, floating point is more common when greater performance is needed and cost is not important. For instance, suppose you are designing a medical imaging system, such as a computed tomography scanner. Only a few hundred of the model will ever be sold, at a price of several hundred-thousand dollars each. For this application, the cost of the DSP is insignificant, but the performance is critical. In spite of the larger number of fixed point DSPs being used, the floating point market is the fastest growing segment. As shown in (c), over one-half of engineers using 16-bit devices plan to migrate to floating point at some time in the near future.

FIGURE 28-8 Major trends in DSPs. As illustrated in (a), about 38% of embedded designers have already switched from conventional microprocessors to DSPs, and another 49% are considering the change. In (b), about twice as many engineers use fixed point as use floating point DSPs. This is mainly driven by consumer products that must have low cost electronics, such as cellular telephones. However, as shown in (c), floating point is the fastest growing segment; over one-half of engineers currently using 16 bit devices plan to migrate to floating point DSPs.

Before leaving this topic, we should reemphasize that floating point and fixed point usually use 32 bits and 16 bits, respectively, but not always. For instance, the SHARC family can represent numbers in 32-bit fixed point, a mode that is common in digital audio applications. This makes the 2^32 quantization levels spaced uniformly over a relatively small range, say, between -1 and 1. In comparison, floating point notation places the 2^32 quantization levels logarithmically over a huge range, typically ±3.4×10^38. This gives 32-bit fixed point better precision, that is, the quantization error on any one sample will be lower. However, 32-bit floating point has a higher dynamic range, meaning there is a greater difference between the largest number and the smallest number that can be represented.

C versus Assembly

DSPs are programmed in the same languages as other scientific and engineering applications, usually assembly or C. Programs written in assembly can execute faster, while programs written in C are easier to develop and maintain. In traditional applications, such as programs run on personal computers and mainframes, C is almost always the first choice. If assembly is used at all, it is restricted to short subroutines that must run with the utmost speed. This is shown graphically in Fig. 28-9a; for every traditional programmer that works in assembly, there are approximately ten that use C.

However, DSP programs are different from traditional software tasks in two important respects. First, the programs are usually much shorter, say, one-hundred lines versus ten-thousand lines. Second, the execution speed is often a critical part of the application. After all, that's why someone uses a DSP in the first place, for its blinding speed. These two factors motivate many software engineers to switch from C to assembly for programming Digital Signal Processors. This is illustrated in (b); nearly as many DSP programmers use assembly as use C. Figure (c) takes this further by looking at the revenue produced by DSP products. For every dollar made with a DSP programmed in C, two dollars are made with a DSP programmed in assembly. The reason for this is simple; money is made by outperforming the competition. From a pure performance standpoint, such as execution speed and manufacturing cost, assembly almost always has the advantage over C. For instance, C code usually requires a larger memory than assembly, resulting in more expensive hardware. However, the DSP market is continually changing. As the market grows, manufacturers will respond by designing DSPs that are optimized for programming in C. For instance, C is much more efficient when there is a large, general purpose register set and a unified memory space. These future improvements will minimize the difference in execution time between C and assembly, and allow C to be used in more applications.

To better understand this decision between C and assembly, let's look at a typical DSP task programmed in each language. The example we will use is the calculation of the dot product of the two arrays, x[ ] and y[ ]. This is a simple mathematical operation: we multiply each coefficient in one array by the corresponding coefficient in the other array, and add the products.
These two factors motivate many software engineers to switch from C to assembly for programming Digital Signal Processors. This is illustrated in (b); nearly as many DSP programmers use assembly as use C. Figure (c) takes this further by looking at the revenue produced by DSP products. For every dollar made with a DSP programmed in C, two dollars are made with a DSP programmed in assembly. The reason for this is simple; money is made by outperforming the competition. From a pure performance standpoint, such as execution speed and manufacturing cost, assembly almost always has the advantage over C. For instance, C code usually requires a larger memory than assembly, resulting in more expensive hardware. However, the DSP market is continually changing. As the market grows, manufacturers will respond by designing DSPs that are optimized for programming in C. For instance, C is much more efficient when there is a large, general purpose register set and a unified memory space. These future improvements will minimize the difference in execution time between C and assembly, and allow C to be used in more applications. To better understand this decision between C and assembly, let's look at a typical DSP task programmed in each language. The example we will use is the calculation of the dot product of the two arrays, x [ ] and y [ ]. This is a simple mathematical operation, we multiply each coefficient in one Chapter 28- Digital Signal Processors 521 Assembly C b. DSP Programmers Assembly C a. Traditional Programmers Assembly C FIGURE 28-9 c. DSP Revenue Programming in C versus assembly. As shown in (a), only about 10% of traditional programmers (such as those that work on personal computers and mainframes) use assembly. However, as illustrated in (b), assembly is much more common in Digital Signal Processors. This is because DSP programs must operate as fast as possible, and are usually quite short. Figure (c) shows that assembly is even more common in products that generate a high revenue. TABLE 28-2 Dot product in C. This progam calculates the dot product of two arrays, x[ ] and y[ ], and stores the result in the variable, result. 001 #define LEN 20 002 float dm x[LEN]; 003 float pm y[LEN]; 004 float result; 005 006 main() 007 008 { 009 int n; 010 float s; 011 for (n=0;n