Texas Instruments has not only been part of the Voice over IP space for a long time, but its involvement spans a wide range of technologies, from basic devices like handsets to base stations and even more complex systems. That complex-technology space, of course, is where companies like TI thrive, and the DSP space in particular is one in which TI has played a dominant role for some time. Indeed, DSP technology spans the communications spectrum, which underscores its importance: it can be found everywhere from consumer goods like MP3 players and mobile phones to specialized applications like medical imaging (MRI, CAT scans, and the like).
Because its technology has such wide ranging applications, TI also finds itself amid constant change and evolution as various elements in the communications space are developed and improved, and TI must ensure its products keep pace. Likewise, that same pace of change ensures that competition also maintains its edge, looking for the smallest opportunity to win market share from TI.
Rich recently spoke with Brian Glinsman, General Manager, Communications Infrastructure & Voice Business, DSP Systems at Texas Instruments. Brian runs what the company calls its communications infrastructure and voice business. He explained how TI fits into the VoIP space, recounted its history in communications, and delved into the competitive landscape.
RT: Can you give us a brief history of TI’s involvement in the Voice over IP space?
BG: Texas Instruments has been involved from the early days, when it was all about a DSP with somebody’s software. That somebody could be AudioCodes, it could be Telogy, it could have been Hothouse… there were probably more software providers out there than I can even remember, and most of them were on TI DSPs. (Not all: there were some on Motorola, now Freescale, and ADI has had various entries into and exits from the business, but predominantly it was a TI market back in the ‘97-‘98 timeframe.) It was really about us having the C54 core, which was designed around mobile handsets and plays very well into the voice over packet market.
Then, at the end of the boom cycle, the acquisitions began. Texas Instruments bought Telogy for the software value, Broadcom bought Hothouse, Intel bought Dialogic, and there may have been others in that time frame. As time went on, we continued to make investments in a lot of different companies, from board vendors bringing out generic blades on ATCA to other form factors. I probably have about 15 investments placed around the voice over packet arena, to the tune of tens of millions of dollars, that enable us to help get technology moving in the right direction.
But we don’t provide everything, so sometimes it’s complementary, where we will invest to make sure generic platforms are available for the customer who doesn’t want to design everything himself. AudioCodes is a good example. They predominantly just buy silicon, but I really like having them around because they bring a lot of value above what I do. I can’t do everything, so they’ll go out and attack specific verticals or markets that I just can’t get enough R&D into. The same holds for Surf, and NMS, and Brooktrout. They are really customers, but they extend what I am able to do.
During its spending spree of the late ‘90s, TI also bought a cable modem company, a DSL company, a wireless LAN company, and a company that does Bluetooth, essentially uniting all those different technologies so that we can provide everything you need in a home broadband router of any type. It has really helped us with market share on the cable side, where there are two main players, Broadcom and TI, and our market share is well above 50%.
On the data side, Broadcom is a little bit stronger, but the nice thing is the market has been moving more toward integrated voice, and that has been pulling our overall market share up. To us, voice is very important because it’s the natural application. Everything else is great… video is wonderful, data is wonderful, but everyone wants voice communications.
As we’ve gone down this path, the two markets are really becoming more polarized: on the infrastructure side, people are looking for hundreds of channels on a device, while on the client side, people are looking for complete functionality, whether it be an IP phone with a video display or a broadband modem that is more than just a voice channel. So if you look at the cable, DSL, and Telogy investments TI made in the late ‘90s, certainly well over a billion dollars in total, they have enabled TI to take those applications, add them to its chips, and develop complete targeted systems-on-chip for different verticals.
I would say the IP phone is probably the biggest client product, along with integrated DSL and cable modems. When you look at the world market, the real volume is either going to be integrated into the broadband device or into the desktop device, not a standalone voice adapter. There is still a market for that, and there will continue to be, because you always want competition: if your service provider is not offering you good enough service, you can go out with that type of model. But I think the real volume is in the other two.
RT: How do the VoIP market and the wireless market play together?
BG: On the handset side, it really hasn’t yet. It’s starting to, because you now have some service providers offering dual-mode capabilities, which does a couple of things. It reduces the minutes on the wireless infrastructure, and it also gives you better in-home coverage. So you have a phone that has wireless LAN in it and, in T-Mobile’s case, also has GSM in it, and it can switch between the two, but you have one phone number.
There are a couple of European companies offering, in Europe (and in the U.S. as well, if you have an unlocked phone), the ability to download an application that gives you two phone numbers. You have your wireless phone number, but you also get a VoIP number, so whenever you are in any kind of hot spot you can log into, you can choose to go VoIP at a free or much reduced rate rather than using the wireless connection. So we’re starting to see some convergence in this.
The challenge is that each service provider, whether T-Mobile or Verizon or Vodafone or Vonage, has a different idea of how this should work, because each owns a different part of the network. In some cases, they own the broadband as well as the wireless; in others, they don’t own the wireless; and in others, they own only 802.11 or broadband.
So they are each coming out with different requirements and business models that, obviously, are to their own benefit and, most likely, detrimental to their competitors. We’re seeing a lot of very interesting ideas, and we’re also seeing more and more requests to basically take a base station and put it into your home router, although we’re still quite far from hitting the current price points of home routers with a complete base-station-in-a-box. But that is where we’re starting to see some convergence.
There’s also an Israeli company that basically does a dial-back service. When you are roaming internationally, their little service application places your call and calls you back, so you just pay the local air rate as opposed to whatever your cellular provider charges for long distance, which is typically $1 or more per minute. So there are lots of interesting little business models out there.
RT: What’s happening on the infrastructure side?
BG: On the infrastructure side, we’re seeing convergence happen much faster with the media gateways. Each OEM vendor basically had a proprietary link from their radio to their voice blade, whether they were in the same box or separate. It didn’t really matter, because there was no standard interconnection from the radio protocol to the voice protocol, so if you bought a Nokia base station or a Motorola base station, you also bought their voice by default.
With 3G and media gateways, standard transports between the radio and voice blades, again regardless of whether they are in the same box or separate, now provide the ability to utilize one vendor’s radio with a different vendor’s voice.
Also, in an effort to minimize R&D expenditures, most of the large OEMs are taking the same base platforms from the Class 4 or 5 replacement, and putting them into a media gateway. All the big vendors have either released or are developing platforms that can attack wireline or wireless with the same platform. So we’re seeing convergence there probably faster than the handset side.
RT: It sounds like there’s a lot more money in the base station too, right?
BG: Right now, there is probably more money in the voice base station, though that will change pretty quickly. There are really a couple of different dynamics on the infrastructure side. I think the latest figure is 1.2 or 1.5 billion wireline ports in the world and, on the other side, we hit about 2 billion cellular subscribers coming into this year, with forecasts near 4 billion by 2010 (I believe we will reach that figure). But most of that growth comes from places like India and Africa; in the U.S., Europe, and Japan, there is some growth, but it’s mostly upgrades and new features, not new subscribers.
But on the wireline side, you’re not seeing that explosive growth. If anything, you are actually starting to see a little bit of a decrease: while the developing world is adding phone lines, the developed world is actually reducing them. In the U.S. and Europe, the number of connected wired phone lines is actually going down.
All of a sudden you get a big order from a customer and start thinking, “This is great — it’s finally taking off.” Then you find out it’s equipment replacement after a hurricane. Then you see the next big uptick and find out the company has decided to upgrade one central office to see how it performs. So it comes and goes on the wireline side, and eventually, in 20 years or so, the wireline side won’t need it, because theoretically everything on the client side will already be packetized. You will no longer have to worry about TDM in the core.
Now, given how slowly this all moves, one can argue 20 years is a long time, and some would argue it’s a very fast time. I used to work for GTE years ago and, knowing how companies work in a central office, I would argue it’s fast. But there’s a balance between how much wireline replacement we’re going to do and how much just eventually comes in through the media gateways.
On the other side, what we’re starting to see is many companies offering IP phone technology for the hosted SMB market, and they are really starting to host, whether by getting their own little server or by using their provider of choice. We really see that market starting to grow now and, in two years, it could be a very large market if things continue this way. So, from our perspective, probably the hottest growth market in the voice world is the SMB market.
The residential IP phone still has some time to go, and the real reason is that you are fighting physics. The cordless phone you have today has great battery life, it works well, it’s cheap, and the range is very good. There are really no complaints, and they really are dirt cheap. But as soon as you put WiFi in, you’re transmitting at much higher bit rates, and it really comes down to a choice between shorter range and shorter battery life. Right now, in any wireless LAN phone, it’s both. Certainly technology can improve that, but there is still the part of physics that says energy per bit is what matters. When all of the dust settles, how much energy do you have per bit if you are transmitting at 54 Mb/s on a wireless LAN versus something on the order of 100 kb/s or even lower? You really are going to fight an uphill battle for a long time.
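Brian’s energy-per-bit point can be made concrete with a small sketch. Holding energy per bit fixed (roughly, the same range and link margin) means transmit power must scale linearly with bit rate; the 10 mW / 100 kb/s reference below is an illustrative assumption, not a measured handset figure:

```python
def required_tx_power(bit_rate_bps: float,
                      ref_rate_bps: float = 100e3,
                      ref_power_w: float = 0.01) -> float:
    """Power needed to hold energy per bit constant as bit rate grows.

    Energy per bit is E_b = P / R, so keeping E_b fixed means
    P = E_b * R: power scales linearly with bit rate.
    The 10 mW at 100 kb/s reference point is arbitrary, for illustration.
    """
    energy_per_bit = ref_power_w / ref_rate_bps
    return energy_per_bit * bit_rate_bps

# A 54 Mb/s wireless LAN radio needs 540x the transmit power of a
# 100 kb/s cordless link to deliver the same energy per bit.
print(required_tx_power(54e6) / required_tx_power(100e3))  # -> 540.0
```

Real radios complicate this with duty cycling and modulation differences, but the linear scaling is the core of the uphill battle he describes.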
There’s also the support issue: Who is going to help the user? I love my Mom dearly, but I’m her computer tech support desk. I would hate to have her on an IP phone. In fact, she called me and said she was switching to Vonage. I told her, “I’ll pay your phone bill.” I love Vonage, by the way; they are a good partner of ours. But it’s not for my Mom, because I don’t even know how she would call me if she had a problem. So we still have some quality issues and some kinks to work out of all of this to get where we’re going, and we’re really trying to drive packetized voice, on both the wireless and wireline sides, to be better than TDM.
And there’s no reason it can’t be. You have wider bandwidth; you’ve got a lot of horsepower to really enhance the voice and suppress the noise; you’re not restricted to the old 3 kHz pass band. You can do a lot of good things, but they take time because, unless everyone has the capability, you don’t necessarily see it. But we’re really trying to drive a better voice experience to the market through wideband codecs, better echo cancellation, better noise suppression, and better quality metrics along the entire call path.
You’ve probably seen me advertise this thing called PIQUA. It’s really a combination of centralized and decentralized quality monitoring that enables you to predict when you are going to have problems, take action in a decentralized or centralized way, and send notification that a problem exists. You really start seeing some benefit when you see these integrated devices, like a DSL modem with built-in voice. It’s not that hard for us to look and say, “The reason your voice call is having a problem is because someone is uploading a video file at the same time and you don’t have enough bandwidth.”
So, give voice priority. You could also put up a note if the user is going to continue video uploads, or suggest they think about a bigger pipe. There are a lot of things that can be done. There’s also a tremendous amount of information out there that is not being utilized, and the real thing is to get it from people like Texas Instruments who can provide low-level system capability, because we have the chip, we have the knowledge, and we have all of the software that resides there. With that information, when a customer calls about poor call quality, the support person knows it was a bad experience and can tell the customer it was the echo canceller, or there was too much jitter, or whatever, and that they are working on it. The alternative is, “I have no idea. Do you want a refund?” or “Reset your data,” which is another common answer.
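As a rough illustration of the kind of rule-based diagnosis Brian describes, here is a toy sketch. The field names and thresholds are invented for the example; they do not reflect PIQUA’s actual metrics or logic, which are not detailed in this interview:

```python
def diagnose_call(stats: dict) -> str:
    """Toy call-quality diagnosis from a dict of (hypothetical) metrics."""
    # Bandwidth contention: a concurrent upload eating most of the uplink.
    if stats.get("concurrent_upload_kbps", 0) > stats.get("uplink_kbps", 0) * 0.8:
        return "bandwidth contention: concurrent upload is starving voice"
    # Jitter beyond what a typical jitter buffer absorbs comfortably.
    if stats.get("jitter_ms", 0) > 30:
        return "excessive jitter on the path"
    # Poor echo return loss suggests the echo canceller is not converging.
    if stats.get("echo_return_loss_db", 99) < 6:
        return "echo canceller not converging"
    return "no obvious fault"

print(diagnose_call({"uplink_kbps": 384,
                     "concurrent_upload_kbps": 350,
                     "jitter_ms": 12}))
# -> bandwidth contention: concurrent upload is starving voice
```

The point of centralized plus decentralized monitoring is exactly this: the device closest to the problem has the raw metrics to answer “why was that call bad?” instead of “reset your modem.”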
RT: It seems like you are in a really good position to be involved in the voice quality business.
BG: We have to enable it, but unfortunately, just enabling it isn’t the only answer. It really has to be driven all the way through. For instance, I have a cell phone from a certain provider who advertises the fewest dropped calls and lately, it’s because I get the fewest placed calls.
Part of the problem is that operators are upgrading their networks for 3G. They are upgrading to add data and recycling spectrum, so all of a sudden an area that used to work great doesn’t, because they have taken away some capacity to put in some other technology. We can do better.
I think cell phones have trained us to expect poor quality and I would really like to reverse that. So what we’re trying to do is make sure we have all of their information and we do work with the service providers. It’s a difficult path because, if we get too close to the service providers, our customers, who are between us and the service providers, get upset and I understand, because that’s their market.
But if we don’t start getting everyone together, it won’t happen; there are too many proprietary systems and too many different ways to implement voice. We just have to make quality a given, kind of like cars. If you look at cars back in the ‘70s, poor quality was expected; then a couple of foreign manufacturers made quality the number-one concern, and all of a sudden everyone figured out, “Well, if I don’t have quality, I’m not going to play.” So we really need to get to the point where it’s not a question of my implementation versus yours. Everyone has to have quality and, for this market to really go forward, you’ve got to get to the point where all the user knows is that it’s better.
RT: What do you think about the turn of events with Intel/Dialogic and HMP versus DSP development?
BG: The DSP, historically, and still today, has a tremendous advantage when you are doing complex mathematical algorithms, but it loses some of that edge when you are just doing control code. Where we see the advantage, and have always seen the advantage, is power. We measure everything in terms of the three domains in infrastructure: the power per unit, the cost per unit, and the space per unit. The DSP, hands down, wins the space war and the power war.
We typically have slower speed processors, 1 to 1.5 GHz today, with someone like Intel being able to go up to 4 GHz. They have faster clock speeds, in general, but from my 1 GHz part, I’m using 5 watts of power, with memory and peripherals up and running. Your typical Intel processor or RISC processor is at least 10 watts, if not as much as 100 watts. So, they tend to burn a lot more power and, though they tend to be faster, they’re not that much faster. But, if you are doing an echo cancellation or using a low bit rate codec or tone detection, the DSP, even at the slower speed, can get more done.
Now, on the flip side, if all I’m doing is control code — high level C source code that is just shuffling buffers — the DSPs historically have been disadvantaged. In the old days, we didn’t have external memory or, if we did, it was SRAM. The newer devices have DDR2 and all of the wonderful peripherals, but even so, I would argue that the RISCs are probably ahead of us on memory management. They have been doing it for a lot longer and so control code tends to be a bit of a negative.
On the infrastructure side, the DSPs and the DSP blades are very, very targeted at the high computational issues. We’re even seeing them start to creep into some of what used to be the domain of control code, like in 2G and 3G. We’re starting to see them show up in encryption and decryption because they do it very effectively, and even to do some of the packet switching, because they can do that quite well. That said, we still see RISC, whether it’s the network processor or generic processor, as the brains of the outfit… the one that’s running the SIP stack, the one that handles provisioning. DSPs don’t do well there.
In the client boxes, TI has long been a proponent of a two-core solution, one RISC and one DSP, as offering the greatest benefit, because it gives you the ability to handle some of the most complex things out there. But what we’re seeing is that the RISC people are now putting in DSP extensions and we’re putting in much better memory management, so we’re both starting to move closer to the center line.
We’ve been doing multi-core products since 1998. Intel is now doing multi core, because going from 4 GHz to 8 GHz, while possible, was going to put holes in boards. So the challenge there is how do you utilize multiple cores effectively? Do two cores at 2 GHz equal one core at 4 GHz? The answer can be surprising. Sometimes it equals more. Sometimes it equals a lot less. It just depends on how you are utilizing it and how you are feeding it. And those are really the tricks of the trade.
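Brian’s “do two cores at 2 GHz equal one core at 4 GHz?” question is essentially Amdahl’s law, and the surprising answers he mentions fall out of it directly. A short sketch, with hypothetical workload fractions:

```python
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup on `cores` cores when only
    `parallel_fraction` of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Fully parallel work, e.g. independent voice channels: two 2 GHz
# cores match one 4 GHz core.
print(speedup(1.0, 2))            # -> 2.0
# Mostly serial control code: two cores fall well short of 2x.
print(round(speedup(0.5, 2), 2))  # -> 1.33
```

Voice-over-packet channels are close to the fully parallel case, which is one reason multi-core DSPs fit this market well; serial control code is where the RISC still earns its keep.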
But the real reason we’ve kind of hit the wall is silicon technology. Most vendors are shipping what are called 130 nanometer or 90 nanometer products, while some vendors, like Intel and TI, are shipping 65 nanometer products in fairly high production.
From the early ‘90s until probably about 2002, every time we had a geometry reduction, we took the core voltage down. The core voltage went from 5V to 3.3V to 2.5V to 1.8V, all the way down to 1V now, and power is a function of voltage squared. So taking the core voltage down usually meant you could go faster at less power. Unfortunately, what we’ve hit, pretty much industry wide, is a point where one volt is kind of the floor for running high-speed SRAM, and because of that, we’re not able to take the core voltage down as we go down in geometry. In fact, what we’re finding is that, depending on the design, a 65 or 45 nanometer chip can actually use more power than its predecessor for the same function, and that’s not where we want to be.
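The voltage-squared relationship Brian cites is the first-order CMOS dynamic power model, P ≈ a·C·V²·f. A quick sketch with made-up capacitance and clock numbers shows why losing voltage scaling hurts so much:

```python
def dynamic_power(cap_farads: float, voltage: float,
                  freq_hz: float, activity: float = 1.0) -> float:
    """First-order CMOS dynamic power: P = a * C * V^2 * f."""
    return activity * cap_farads * voltage**2 * freq_hz

# Hypothetical chip: same switched capacitance and clock, core voltage
# dropped from 1.8 V to 1.0 V (all numbers illustrative).
p_18 = dynamic_power(1e-9, 1.8, 1e9)
p_10 = dynamic_power(1e-9, 1.0, 1e9)
print(round(p_18 / p_10, 2))  # -> 3.24
```

So each historical voltage step bought roughly a 2x to 3x power saving for free; with the voltage floor at about 1 V, a new process node no longer delivers that, and leakage (not modeled here) makes small geometries even worse.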
So that’s why you are starting to see a lot more of these multi-core devices: typically, somewhere in the 1 GHz to 3 GHz range, the power goes non-linear, meaning that every additional MHz consumes more power than the previous MHz. So one of the reasons you are starting to see clock speeds back off a bit at the very high end is that it actually gives better power performance. It’s getting a little tougher. It used to be a really easy roadmap: just go to the next process node, and I’ll give you more speed, less power, and a cheaper price. I hit all three metrics and life is good. Unfortunately, it’s not as easy today.
RT: In a multi-core environment, do developers take the brunt of this technology lead? Or are we at a point where compilers and other tools are able to deal with the multi-core?
BG: That’s a really good question, and it depends a lot on the particular implementation. In the DSP world, we tend to do it one of two ways. If it’s four cores, it can look like four devices; there are pros and cons to that, but we’re giving you four devices in the space of one chip, with maybe half the power, and your code is 100% portable with no changes required. So life is good, but you really can’t bond the cores together.
The alternative is, say, four or six cores with a big chunk of shared memory that serves all of them. In a VoIP application, that works very well: every core runs the same program code, so you put a megabyte of on-chip memory, load your program code once, and all of the cores utilize it, and it looks like four or six times the memory. That does, however, require some work by the end user if they are writing their own software.
If you use the software we provide, we take care of all of it. What we try to do is the base offering, so we will provide you with every voice codec possible. We’ll provide you best in class echo cancellation, tone detection, jitter buffer management… all the pieces of a VoIP channel. We’ll provide all of that to you, and now what you really need to do is differentiate what’s above.
IMS comes out and all of the different media management comes out, and there’s a lot more to it than just having a good voice channel. We’re trying to move the traditional VoIP providers up the food chain and have them look at how TI can enable a better system for them.
There is no question that multicore is tougher if you are starting the programming from scratch, but if you already have applications that are aware of the multicore architecture, it’s pretty easy. We have the tools so that you don’t have to worry about it. What we do to reduce cost and power is leverage the common memory: you have one piece of memory outside the device, and all the cores access that same memory. Our technology enables that without collisions, overwrites, or stalls, so you can get the performance you need.
RT: Going forward, where do you see the communications market heading in the next five years?
BG: That’s the magic question. Unfortunately, it’s going to have fits and spurts and stops because of various economic issues or government interventions. I think the big thing now is going to be your phone number ringing wherever you are. So whether it’s your office phone, your cell phone, your home phone, your Vonage phone, whatever device is accessible… it’s one phone, right?
The other thing is getting to a much better quality with wireless, wireline, everything combined, so you really have an FM radio quality of experience — or let’s go with the times, an XM radio quality experience. So on the voice side, that’s really what I see. You get a single phone and it doesn’t matter who calls you where. You may have five phone numbers. It doesn’t matter. It all rings on one device. Hopefully, you don’t need five phone numbers any more. You have one number that works universally, but you might want your business phone to be separate. But you still want it to ring in one place. To me, everything is in place to do that. Now, it’s about connecting all the pieces.
The other thing is the network infrastructure, and I don’t like this as much because it means fewer DSPs, but you really have to get all of these networks connected. There are very, very few calls that don’t go back to TDM at some point. Even if you’re packet on both ends, you probably go packet to TDM and back to packet. There is degradation associated with that, whether it’s transcoding or delay or whatever else; it is a problem.
RT: There’s also a whole transcoding market; it looks like you’ll be the engine supplying that market.
BG: Financially it’s good for us but, at the end of the day, it has to happen. We’ve got to get to the point where there are just a couple of codecs: a wideband codec and a narrowband codec. We can’t continue with 2G having one set, 3G having another set, TD-SCDMA in China having another set, and so on. If we continue down that path, we will have to keep transcoding and, ultimately, that results in a quality hit. It’s also a cost hit.
So I think over five years we’ll make progress on this; we won’t resolve it. I think you will see the 3G networks become interconnected, and maybe a few of the major networks, like a Comcast or a Vonage, get interconnected with 3G, depending on their partnerships. But, unfortunately, though the technology exists, it now comes down to business needs and government regulation… and I never underestimate how slowly that can move.
RT: Are there any other comments that you have about TI’s vision?
BG: Obviously, we have a huge video push also, though it costs a little bit more to do two-way video. With IMS, if you’re going to have video coming out — and there are already some cell phones that can receive video — the real question is how do you deliver video to the end user. There are three or four ways to do it.
There are lots of different capabilities and it’s going to take time to figure it all out, because each service provider has their own reason for doing what they do. So it will take some time to really sort this out, but video — both informational as well as an extension of your media at home — will become more and more common.
Two-way video is probably longer out. It’s not that we can’t do it, but I gave my kids and other relatives the capability to video conference with me three years ago, and I can probably count on one hand the number of times we’ve actually done it. It is definitely going to take a while for video — being able to look at each other while you are talking — to become a great interest to a lot of people.
RT: The ability to capture video on a cell phone and email it has changed the way people communicate with their audiences and I can envision that ability being coupled with YouTube and other types of online media.
BG: It’s definitely coming. My oldest is in junior high school and, for his chemistry projects, about five kids come to my house, do a whole video production, stick it on the computer, upload it, and then play it in school. It’s definitely different, but it’s going to take some time for point-to-point or even point-to-multipoint interactive video to really grow. The shifting of video — content shifting, place shifting, time shifting — has all been proven, but with some of these capabilities everyone gets worried about the content, and that’s kind of a drag. But I do think video has a big place in the infrastructure.
I’m just not sure how much of it is going to be person-to-person in the near term versus longer term. Near term, there’s certainly enough video out there that you want to take and move, so that’s certainly going to drive infrastructure products, in terms of data, some infrastructure products in terms of transcoding/transceiving, and then it’s certainly going to drive a lot of radio equipment as the capacity needs go up and up. So I view those as all really good things.
Rich Tehrani is President and Editor in Chief at TMC.