Development and Prospect for Fiber Communications Technologies
(Update time: 2010-9-2 20:34:54)
Abstract: This article provides a brief summary of and outlook on the development trends of key technical fields in fiber communications systems. The main conclusions are as follows. SDH will be transformed into a converged, low-cost multi-service platform. Some new optical Ethernet solutions are meeting the functionality and performance requirements imposed by public telecom networks; these solutions provide choices for metropolitan multi-service networks, but a series of issues still need further clarification and improvement. 40 Gbit/s transmission technology is nearing the final stages of its development, but it will be a while before large-scale commercial application is realized. The technology and market prospects for ultra-long-haul WDM are excellent, and coarse WDM has good prospects in metropolitan area networks in China. Point-to-point WDM transmission will evolve or be transformed into an automatic switched optical network. EPON and GPON will become the leading FTTH technologies.

However, the large-scale deployment of all the above technologies still needs further consideration in terms of the cost, mapping technologies and types of applications.

At the beginning of this century, the network, fiber, and 3G bubbles burst, throwing the telecom industry worldwide into a tight corner, and fiber communications was the first area to be affected. Fortunately, the inherent demand for telecommunications is still very much in evidence. People still make calls and access the Internet; short messaging is becoming more and more popular; and IPTV is simmering and almost ready to be served up in the telecommunications market. In fact, the telecom market keeps growing. For instance, demand for network bandwidth worldwide grows at a rate of 50% to 100% annually, and backbone service and bandwidth demand in China has grown at a rate of nearly 200% in recent years. The difficulties brought about by the bubble burst only slowed the speed of development; they never halted the development of telecommunications technologies and services. After years of realignment, the telecom industry is now back on track and the market appears more stable and normal. This article gives a brief summary of the development trends and prospects for fiber communications technologies.

SDH is transforming into the converged, low-cost multi-service platform of the next generation

SDH is considered by many to be the mainstream transmission hierarchy in telecom networks. However, due to the emergence and continued development of WDM, SDH has changed much in terms of both its position and its application. For example, on the long-haul backbone, SDH has been reduced in importance to being the client layer of WDM, and its application is moving to the network edge. Because client signals at the network edge are complicated, SDH must transform itself from a pure transport network into a multi-service platform that integrates both the transport network and the service network. That is to say, each SDH node needs to become a converged multi-service node. Once this has been accomplished, we will be able to make full use of the trusted and mature aspects of SDH technology, especially its service protection and guaranteed delay performance, while adding the ability to adapt to multi-service applications and to support Layer 2, or even Layer 3, intelligent data features, thereby building a multi-service transport platform (MSTP) that integrates the transport layer and the service layer.

In recent years, the percentage of data services in the network has been increasing exponentially. As a result, the SDH multi-service platform is evolving from simple encapsulation and transparent transmission of data toward a next-generation SDH that supports data services in a more flexible and efficient manner. The latest advances include the integration of the generic framing procedure (GFP), the link capacity adjustment scheme (LCAS), and the automatic switched optical network (ASON).

GFP is a mapping technology that can transparently encapsulate various data signals into a universal, standardized frame structure for use in current networks. GFP is simple and flexible: it keeps overhead low and efficiency high. Its other attractive features include good interoperability between vendors, statistical multiplexing of user data, and a QoS mechanism. In addition, because the processing of arbitrary byte blocks is simplified, GFP imposes less stringent requirements on the mapping and demapping of data links. By exploiting the low BER of modern communications links, GFP also reduces receiver complexity, equipment size, and cost. This makes it a technology that is perfectly suited to high-speed transmission links, such as point-to-point SDH links, wavelength channels, and dark-fiber applications in the OTN.
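To make the encapsulation idea concrete, here is a minimal, simplified sketch of GFP-style framing. It builds only a 4-byte core header (a 2-byte payload length indicator protected by a CRC-16 header error check) in front of a client payload; the payload header, scrambling and optional FCS defined by the standard are omitted, and the CRC-16-CCITT polynomial used here is an assumption for illustration.

```python
def crc16(data: bytes, poly: int = 0x1021) -> int:
    """Bit-by-bit CRC-16 (CCITT polynomial, assumed here for the cHEC)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc


def gfp_frame(client_payload: bytes) -> bytes:
    """Wrap a client data block in a simplified GFP-style frame.

    Core header = 2-byte payload length indicator (PLI) + 2-byte cHEC.
    Payload header, scrambling and the optional FCS are omitted.
    """
    pli = len(client_payload).to_bytes(2, "big")
    chec = crc16(pli).to_bytes(2, "big")
    return pli + chec + client_payload


if __name__ == "__main__":
    frame = gfp_frame(b"Ethernet MAC frame bytes ...")
    print(frame.hex())
```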

LCAS defines a way to execute hitless adjustment of the bandwidth of a virtually concatenated payload in the transport network, thus allowing the service bandwidth to be matched to actual service requirements. The signaling is transported by SDH NEs and the network management system. With LCAS, the payload is automatically mapped into the available VCs. This allows continuous adjustment of bandwidth, which speeds up bandwidth provisioning while having no deleterious effect on services. In addition, when failures occur, the system bandwidth can be adjusted dynamically without manual operations. This increases network utilization significantly while still offering guaranteed QoS.
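The hitless-adjustment idea can be illustrated with a toy model (not the LCAS protocol itself): a virtual concatenation group whose bandwidth is the sum of its active members, where members are added or removed one at a time while traffic continues on the remaining members. The member granularity of roughly 2 Mbit/s (a VC-12) is assumed for the example.

```python
class VirtualConcatenationGroup:
    """Toy model of a VCG managed LCAS-style: bandwidth changes hitlessly
    because existing members keep carrying traffic while members are
    added to or removed from the group one at a time."""

    def __init__(self, member_rate_mbps: float = 2.0):  # ~VC-12 granularity (assumed)
        self.member_rate = member_rate_mbps
        self.active_members = []

    @property
    def bandwidth_mbps(self) -> float:
        return len(self.active_members) * self.member_rate

    def add_member(self, vc_id: str) -> None:
        # A new member joins the group; traffic on existing members is untouched.
        self.active_members.append(vc_id)

    def remove_member(self, vc_id: str) -> None:
        # A failed or surplus member is dropped; the group keeps running
        # at reduced bandwidth instead of failing outright.
        if vc_id in self.active_members:
            self.active_members.remove(vc_id)


vcg = VirtualConcatenationGroup()
for i in range(5):                 # provision ~10 Mbit/s for an Ethernet private line
    vcg.add_member(f"VC-12#{i}")
print(vcg.bandwidth_mbps)          # 10.0
vcg.remove_member("VC-12#3")       # a member fails: bandwidth degrades, service survives
print(vcg.bandwidth_mbps)          # 8.0
```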

ASON can set up and manage connections dynamically, thus achieving automatic routing and provisioning. If the next-generation SDH MSTP integrates standard functions such as VC concatenation, GFP, LCAS and ASON, then, with the help of the automatic routing and provisioning functionality of the ASON core, such a transport platform can support data services far more flexibly and efficiently. In addition, the intelligent features of the ASON core are extended to the network edge, enlarging the intelligent coverage and raising the efficiency of the network.

Because the MAN is facing stiff competition from optical Ethernet, the MSTP has been forced to lower equipment costs and offer more flexible services. One of the most recent and remarkable trends is to incorporate MPLS, so that the MSTP and MPLS expand to the network edge together and complement each other. Once this occurs, MPLS features such as flexible cross-domain data networking can be fully utilized.

Challenges and new developments in optical Ethernet

Optical Ethernet, which originated in the LAN, is a new type of Ethernet technology that runs directly over fiber. In terms of structure, Ethernet is an end-to-end solution that handles Layer 2 switching, traffic engineering and service provisioning in every part of the network, thus eliminating the need for format conversion at the network edge. In terms of scalability, Ethernet is able to provide incremental bandwidth on demand, with a granularity of 1 Mbit/s, by changing the flow policy parameters at the network edge, expanding capacity to 10 Mbit/s, 100 Mbit/s, 1 Gbit/s, or even higher. In terms of management, Ethernet network management is simplified, largely because the same system can be applied to each layer of the network, which quickens the pace at which new services reach the market.
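The "bandwidth on demand in 1 Mbit/s steps" behaviour is typically achieved by policing or shaping flows at the edge port. The sketch below shows one plausible mechanism, a token bucket whose committed rate is set in 1 Mbit/s increments; the token-bucket choice and all parameters are assumptions for illustration, not a description of any particular vendor's implementation.

```python
import time


class TokenBucketPolicer:
    """Simple token-bucket policer: the committed rate can be re-provisioned
    in 1 Mbit/s steps without touching the physical interface."""

    def __init__(self, rate_mbps: int, burst_bytes: int = 64 * 1024):
        self.rate_bps = rate_mbps * 1_000_000   # committed information rate
        self.burst = burst_bytes                # bucket depth
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def set_rate(self, rate_mbps: int) -> None:
        """Re-provision bandwidth in 1 Mbit/s increments."""
        self.rate_bps = rate_mbps * 1_000_000

    def allow(self, frame_len_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate_bps / 8)
        self.last = now
        if self.tokens >= frame_len_bytes:
            self.tokens -= frame_len_bytes
            return True          # conforming frame: forward
        return False             # excess frame: drop (or re-mark)


policer = TokenBucketPolicer(rate_mbps=10)   # customer buys 10 Mbit/s
policer.set_rate(12)                         # later upgraded to 12 Mbit/s, no hardware change
```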

In summary, the Ethernet multi-service platform is especially suited to network applications that mainly carry IP/Ethernet traffic. Such a platform can be deployed as an independent IP MAN in small- to medium-sized cities with heavy IP/Ethernet traffic, or as the access and convergence layers of the IP MAN in medium- to large-sized cities with heavy IP/Ethernet traffic. In both of the above applications, the core layer deploys high-end routers. To date, some enhanced new optical Ethernet solutions are being applied to a few metropolitan multi-service platforms.

Because Ethernet originated in the LAN, QoS is not an issue for LAN applications. However, when Ethernet expands into public telecom networks, differentiated QoS and SLA mechanisms become necessary. First, legacy Ethernet still lacks a reliable mechanism to guarantee jitter and delay performance in end-to-end applications, so it cannot provide standard network-wide QoS provisioning for real-time services; nor does it support the billing statistics required for nodes or networks shared by multiple users. Second, Ethernet was designed for internal use by LAN users, so no security mechanism is available; once Ethernet reaches out to the MAN and WAN, a new and more reliable security mechanism is required. Third, the OAM&P of Ethernet is weak. This matters because a public telecom operator has to run and maintain a large number of geographically separated networks, which demands strong OAM&P ability, network-level management and visibility, and a profitable business model. Fourth, the optical interfaces of legacy Ethernet switches are connected point to point, which saves transmission equipment but forgoes the powerful built-in fault location and performance monitoring capabilities of that equipment. As a result, diagnosing and troubleshooting faults on the Ethernet is difficult, especially in large, complex networks. Fifth, legacy Ethernet relies on the spanning tree protocol (STP) and the rapid spanning tree protocol (RSTP) for protection; such mechanisms require at least several seconds to converge, making it difficult to transport carrier-class voice and data services. Finally, the cost of the fiber plant increases significantly as the network size and the number of nodes increase. However, such network costs are evidently worthwhile; otherwise, many large carrier-class networks would never have been built. In a word, only after the problems mentioned above have been resolved can Ethernet be considered a true multi-service platform and be used in large public telecom networks to provide carrier-class services.

In recent years, optical Ethernet has undergone considerable development, and some of its most recent technical solutions have been able to resolve most of the above-mentioned problems, or at least part of them. Legacy Ethernet technologies have been greatly improved: Ethernet is now able to offer multiple services and enjoys some QoS and network management capabilities, along with higher survivability. Some Ethernet technologies already provide 50 ms protection switching and make use of digital wrapper, forward error correction (FEC) and synchronization technologies, allowing them to improve overall system performance and extend the transmission distance. That is to say, some new optical Ethernet technologies now possess the functions and performance required by public telecom networks. Besides extensions and enhancements of well-known legacy Ethernet technologies such as Q-in-Q and MAC-in-MAC, various standardization organizations have also developed many new optical Ethernet technologies for quality improvement and assurance, including the resilient packet ring (RPR), the multi-service ring (MSR), virtual private LAN service (VPLS), optical Ethernet (OE), and V-Switch. Each new technology has its own unique features. In the following discussion, RPR and VPLS, perhaps the most typical optical Ethernet technologies, are introduced.

As an enhanced Ethernet Layer 2 protocol, RPR can run over SDH, gigabit Ethernet, or dark fiber. Currently, the mainstream application is to run it over SDH, where it becomes a built-in intelligent layer of the new-generation MSTP.

RPR maps IP packets through its new MAC layer into data frames carried over SDH, gigabit Ethernet, or dark fiber, eliminating the need to disassemble and reassemble pass-through IP packets. These pass-through packets are forwarded directly, which simplifies processing, enhances switching capability, and improves performance and flexibility. Second, RPR features relatively high bandwidth utilization for data services. Statistical multiplexing and load-balancing mechanisms are available between the LAN ports of each node; a Layer 2 protection mechanism is adopted, so no bandwidth needs to be reserved for protection; and a MAC protocol with destination stripping is used, so bandwidth can be reused spatially. With all these enhancements, the bandwidth utilization of RPR in mesh and hubbed service modes considerably exceeds that of legacy SDH. Even when compared with next-generation SDH using virtual concatenation, RPR still enjoys advantages such as finer bandwidth granularity and statistical multiplexing. Third, RPR can ensure the quality of circuit-switched services and private line services, and provides 50 ms protection switching. Fourth, RPR has automatic topology discovery and enhanced self-healing capabilities, and supports plug-and-play. Finally, RPR can not only effectively support a two-fiber bidirectional ring topology, but can also support dynamic statistical multiplexing of services in both directions.
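The bandwidth-reuse point can be seen with a small thought experiment in code: because a frame is stripped at its destination rather than circulating the whole ring, two flows whose paths do not overlap can each use the full line rate at the same time. The six-node ring and the single-ringlet forwarding rule below are assumptions chosen only to illustrate spatial reuse.

```python
def ring_path(src: int, dst: int, n_nodes: int) -> list[tuple[int, int]]:
    """Links traversed on one ringlet from src to dst; the frame is stripped
    at dst (destination stripping), so no further links are consumed."""
    path, node = [], src
    while node != dst:
        nxt = (node + 1) % n_nodes
        path.append((node, nxt))
        node = nxt
    return path


N = 6                                # illustrative 6-node ring
flow_a = ring_path(0, 2, N)          # occupies spans 0-1 and 1-2
flow_b = ring_path(3, 5, N)          # occupies spans 3-4 and 4-5
print(set(flow_a) & set(flow_b))     # set(): disjoint spans, so both flows
                                     # can run at full line rate simultaneously
```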

Considering its excellent convergence features and enhanced data access capabilities, RPR is well suited to access-layer applications in the MAN. However, RPR requires the addition of a MAC layer, which increases system complexity and cost accordingly. In addition, because there is no cross-ring standard for RPR, RPR information cannot cross rings, making it difficult for RPR to support complex networking topologies.

However, the combination of RPR and MPLS can map cross-ring traffic flows onto the same label switched path (LSP), enabling interconnection and end-to-end provisioning of services across multiple RPR rings. By applying MPLS, network-wide traffic engineering becomes available, supporting spatial reuse and providing guaranteed bandwidth and end-to-end service connections with guaranteed QoS. This greatly enhances the flexible networking of data services, while extending the application to complex topologies such as mesh networks.

VPLS is a multipoint Layer 2 VPN technology developed on the basis of point-to-point MPLS. To the user, all the nodes appear to be connected to a dedicated LAN, while for the service provider the IP/MPLS infrastructure can be reused to offer various services. The technology is based on MPLS, is independent of the physical topology, and achieves optimized allocation of resources through MPLS traffic engineering. VPLS adopts RPR, instead of the STP and RSTP of Ethernet, to offer 50 ms protection switching. VPLS also supports an extensible access control list (ACL) at Layers 2/3/4 and ACL control on a per-user basis, providing more reliable control and policy mechanisms. VPLS features good Layer 2 convergence capability, and the number of users it supports is not restricted by the 4,096 VLAN IDs of legacy Ethernet. Hierarchical VPLS (H-VPLS) improves its scalability. VPLS can distinguish and guarantee the different traffic volumes of users, simplify network service configuration, and speed up service provisioning. VPLS can also easily identify the boundary between the service provider and the customer premises network, for convenience of management.
Of course, all the above-mentioned features do not come free: added complexity and development costs feed into the overall cost to users, though these are partly offset by the inherent low-cost advantage of Ethernet.
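The "all nodes look like one LAN" behaviour described above comes from per-instance MAC learning and flooding over a full mesh of pseudowires, with the standard split-horizon rule applied among core pseudowires to keep the mesh loop-free. The sketch below is a conceptual model of that data plane; the names and structure are illustrative, not a vendor implementation.

```python
class VplsInstance:
    """Conceptual VPLS forwarder: learns MAC addresses per instance and
    floods unknown destinations, applying split horizon between the
    core pseudowires that form the full mesh."""

    def __init__(self, access_ports, core_pseudowires):
        self.access_ports = set(access_ports)
        self.core_pws = set(core_pseudowires)
        self.mac_table = {}                      # MAC -> access port or pseudowire

    def forward(self, src_mac, dst_mac, in_if):
        self.mac_table[src_mac] = in_if          # learn the source address
        out = self.mac_table.get(dst_mac)
        if out is not None:
            return [out] if out != in_if else []
        # Unknown unicast / broadcast: flood, but never from one core
        # pseudowire to another (split horizon keeps the full mesh loop-free).
        targets = self.access_ports | (set() if in_if in self.core_pws else self.core_pws)
        return list(targets - {in_if})


vpls = VplsInstance(access_ports=["ge-0/0/1"], core_pseudowires=["pw-PE2", "pw-PE3"])
print(vpls.forward("aa:01", "ff:ff", in_if="pw-PE2"))    # flooded to access ports only
print(vpls.forward("bb:02", "aa:01", in_if="ge-0/0/1"))  # learned: unicast back to pw-PE2
```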

It is expected that, with the continued increase of IP/Ethernet traffic on the network and the emergence of new Ethernet-based technologies, the optical Ethernet multi-service platform will become more and more popular in MANs.

The development, challenges and applications of the 40 Gbit/s system

Thus far, 10 Gbit/s systems have been deployed in networks in large volumes, and many telecommunications companies have started to conduct field experiments with 40 Gbit/s systems. In network applications, routers with 10 Gbit/s interfaces are widely used, while those with 40 Gbit/s interfaces are just starting to be employed. To improve the efficiency and function of the core network, the most reasonable approach is to raise the single-wavelength rate to 40 Gbit/s.

Generally speaking, the main advantages of adopting 40 Gbit/s transmission are as follows. First, it makes more efficient use of the transmission frequency band, achieving higher spectral efficiency. Second, it reduces the cost of transmission once mass commercial use is reached: when the cost of a 40 Gbit/s system falls below 2.5 times that of a 10 Gbit/s system, practical application and mass commercial use become reasonable. Third, it reduces the cost and complexity of OAM and the quantity of required spare parts, since four NEs are replaced by a single NE. Fourth, it improves the efficiency and function of the core network.
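The 2.5x figure can be read as a cost-per-bit break-even condition; a short worked comparison, using only the ratio quoted above, makes the economics explicit:

\[
\frac{C_{40}}{40\ \mathrm{Gbit/s}} < \frac{C_{10}}{10\ \mathrm{Gbit/s}}
\;\Longleftrightarrow\; C_{40} < 4\,C_{10},
\qquad
\left.\frac{\text{cost per bit at 40 Gbit/s}}{\text{cost per bit at 10 Gbit/s}}\right|_{C_{40}=2.5\,C_{10}}
= \frac{2.5}{4} \approx 0.63,
\]

that is, roughly a 37% saving per transported bit at the quoted threshold, before counting the OAM and spare-parts savings mentioned above.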

However, the transmission rate of a single wavelength is restricted by the electron and hole mobility of IC materials, the dispersion of the transmission medium, polarization mode dispersion (PMD), and the cost performance of the application system. At present, IC material is no longer a key restriction, but the other three factors still create a bottleneck for the practical use of such a transmission rate.

As far as practical application is concerned, the 40 Gbit/s transmission system also faces several challenges. An external modulator must be adopted, and the driver integrated circuits that can output sufficient voltage to drive such an external modulator are not yet advanced enough. NRZ modulation has been used for years, but it is hard to tell whether it can work efficiently and reliably in a 40 Gbit/s system; at the very least, long-haul transmission is difficult with it, so it is inevitable to turn to better RZ codes or other modulation formats with even higher efficiency, such as CS-RZ, DPSK-RZ, CRZ, super CRZ, D-RZ, pseudo-linear RZ, soliton modulation, and so on.

Besides technical factors, the economics are also crucial. From past experience, we know that the 40 Gbit/s system can achieve mass application only when its cost is less than 2.5 times that of the 10 Gbit/s system. In theory, the ideal application for the 40 Gbit/s system is still the long-distance network, simply because it requires the largest capacity and the lowest cost per transmitted bit. In China's backbone networks, however, fiber utilization is less than 30% and SDH circuit utilization is less than 50%, although channel utilization has exceeded 70%, primarily as a result of the large-scale construction of the past few years. Therefore, since the overall capacity of the optical cable network shows a surplus, only an expansion of WDM applications is needed, and there is no immediate need to upgrade the whole network to 40 Gbit/s. Another key factor is the polarization mode dispersion (PMD) of the optical cable. It is said that the PMD characteristics of the cable network in China, except on a few routes, can still support the current 10 Gbit/s transmission systems. However, when the rate goes up to 40 Gbit/s, the PMD-limited transmission distance is reduced in inverse proportion to the square of the transmission rate. This means the transmission distance will be reduced to one sixteenth of that of a 10 Gbit/s system, and second-order PMD will have a greater impact. Whether the PMD characteristics are compatible with long-distance transmission at 40 Gbit/s still needs to be verified by large-scale field tests.
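The one-sixteenth figure follows directly from how PMD-limited reach scales with bit rate, using the common rule of thumb that the tolerable mean differential group delay (DGD) is a fixed fraction of the bit period:

\[
\langle\Delta\tau\rangle_{\max} \propto \frac{1}{B},
\qquad
\langle\Delta\tau\rangle = D_{\mathrm{PMD}}\sqrt{L}
\;\;\Longrightarrow\;\;
L_{\max} \propto \frac{1}{B^{2}\,D_{\mathrm{PMD}}^{2}},
\qquad
\frac{L_{\max}(40\ \mathrm{Gbit/s})}{L_{\max}(10\ \mathrm{Gbit/s})} = \left(\frac{10}{40}\right)^{2} = \frac{1}{16}.
\]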

For short-distance transmission, on the other hand, dispersion compensation, optical amplifiers and external modulators are not needed, so the 40 Gbit/s system is able to provide the lowest cost per bit, and the problems mentioned above no longer present obstacles. It is therefore recommended to start 40 Gbit/s deployment with short-distance interconnection applications, such as the interconnection of routers, switches and transmission equipment within a local exchange, then extend it to the MAN range, and finally to relatively long, long-distance applications.

The development of the ULH WDM system

In recent years, along with many technical breakthroughs and the economic boost the market has been experiencing, the WDM system has developed rapidly. To date, the 1.6 Tbit/s WDM system has been in large-scale commercial use. In order to decrease the number of electrical regeneration points, cut initial and operating costs, improve system reliability and cope with the increasingly long termination distances of IP services, the all-optical transmission distance of the WDM system will also have to be greatly extended, from the current 600 km to over 2,000 km. In such a case, the WDM system's main enabling technologies need to include the distributed Raman amplifier, forward error correction (FEC), dispersion management, strict optical equalization, and high-efficiency modulation formats.

Generally speaking, the main advantages of deploying a ULH (ultra-long-haul) WDM system are as follows. With a considerable decrease in electrical regenerative repeaters, it reduces system cost and signal delay, simplifies the provisioning of high-speed circuits, and speeds up service supply; it also reduces the operation and maintenance costs of the network. It guarantees the highest bandwidth efficiency, with traffic grooming realized at the edge of the core network. It reduces network upgrade costs and simplifies the networking structure, with a further improvement in network transparency, which also makes the evolution to a mesh network easier.

At present, ULH technology is essentially mature and has seen some practical network applications. However, only a few countries or regions need such ULH circuits for long-distance transmission, so its application scale is limited. As a result, equipment costs are higher, largely because the technology cannot yet take advantage of mass production, which also means that the total cost of running the network is not as low as expected. Hence, to some extent, cost also limits the application of ULH technology.

The development of metropolitan CWDM technology

Along with the development of technology and services, WDM technology is also expanding from long-distance transmission into the metropolitan area network (MAN). Since MAN transmission distances are usually shorter than 100 km, the external modulators and optical amplifiers that must be used in the long-distance network may not be needed here, and the increase and expansion of the number of wavelengths is no longer restricted by the frequency band of the optical amplifier. Moreover, it is acceptable to use lasers, multiplexers, demultiplexers and other components with comparatively wider wavelength spacing and lower demands on wavelength precision and stability. For all these reasons, the cost of the components, as well as of the whole system, is greatly reduced.

Although the cost of the metropolitan WDM system is markedly lower than that of the long-distance WDM network, its total cost is still relatively high compared with similar technologies that are presently available. In particular, when the transmission distance is long enough to require an optical amplifier, it becomes necessary to develop a low-cost optical amplifier. Currently, few customers and applications use a whole wavelength of bandwidth at the network edge, so the WDM multi-service platform is mainly applicable to the core layer, especially to applications with large expansion demand and long transmission distances.

The concept of coarse wavelength division multiplexing (CWDM) has emerged in response to demands to further reduce the cost of the metropolitan WDM multi-service platform. This kind of system has three typical wavelength combinations: 4, 8 and 16 wavelengths, with wavelength channel spacing of up to 20 nm and an allowance of ±6.5 nm for wavelength drift. Such features lower the laser specification requirements and thus reduce laser costs. In addition, because the CWDM system places very low requirements on laser wavelength precision, no cooler or wavelength locker is needed; the laser therefore has low power consumption and a very small size. Furthermore, because such a laser can be packaged in a simple coaxial structure, its packaging cost is lower than that of traditional butterfly packaging. In total, the laser module cost can be reduced by as much as two-thirds. As for the filter, a typical dielectric thin-film filter with 100 GHz spacing needs about 150 layers of film plating, while a CWDM filter with 20 nm spacing needs only about 50 layers. The yield of the CWDM filter can therefore be greatly improved, with an estimated cost reduction of at least 50%.

In a word, the CWDM system has much lower requirements than the DWDM system with regard to laser output power, temperature susceptibility, dispersion tolerance, and even laser packaging. With low requirements on the filter as well, the system cost is expected to drop considerably. In particular, the 8-wavelength CWDM system occupies a spectrum that avoids the OH absorption peak near 1385 nm and is applicable to any kind of optical fiber, so it will most likely be put into practice first.
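For reference, the nominal CWDM grid can be generated in a few lines; the sketch below lists the 20 nm spaced channels of the ITU-T G.694.2 grid and then, as an assumed illustration, picks the eight upper-band wavelengths that stay well clear of the water absorption peak near 1385 nm (other 8-channel combinations are possible).

```python
# Nominal CWDM grid (ITU-T G.694.2): 20 nm spacing from 1271 nm to 1611 nm.
grid_nm = list(range(1271, 1611 + 1, 20))          # 18 channels
print(len(grid_nm), grid_nm)

# One possible 8-wavelength combination that avoids the OH (water) absorption
# peak near 1385 nm: the upper eight channels (illustrative choice only).
water_peak_nm = 1385
upper_eight = [w for w in grid_nm if w >= 1471]
assert all(abs(w - water_peak_nm) > 60 for w in upper_eight)
print(upper_eight)                                  # [1471, 1491, ..., 1611]
```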

In regard to service applications, the CWDM transceiver has been packaged in GBIC (gigabit interface converter) and SFP (small form-factor pluggable) form factors and can be directly inserted into gigabit Ethernet switches and fiber channel switches. The CWDM transceiver offers smaller volume, lower power consumption and lower cost than the corresponding DWDM device. Obviously, when one considers service demand and cost, CWDM has a very bright future in the development of China's MANs.

From point-to-point WDM transmission to ASON

Although the ordinary point-to-point WDM communication system provides huge transmission capacity, it offers only raw transmission bandwidth; flexible nodes are needed to realize highly efficient and flexible networking. However, due to the complexity of the current electrical DXC (digital cross-connect) system, its node capacity cannot keep pace with the growth of network transmission link capacity. As a result, further expansion relies on optical nodes, namely the OADM (optical add/drop multiplexer) and the OXC (optical cross-connect).

OXC falls into two categories of technological implementation: one is implemented with an electrical cross matrix, sometimes abbreviated as OEO or electrical OXC; the other is implemented with a pure optical cross matrix, sometimes abbreviated as OOO or all-optical OXC.

The former (the electrical OXC) can easily monitor signal quality and eliminate transmission impairments. It has an advanced network management system, is low in cost when the capacity required is relatively small, and is compatible with current interconnection technology. Above all, the electrical OXC can process and allocate bandwidths smaller than a whole wavelength, and is therefore able to meet current market capacity requirements. However, the expansion of the electrical OXC still relies on continuous improvement in semiconductor chip density and performance, which has not been able to keep pace with the growth of network transmission link capacity.

As for the latter (the all-optical OXC), first, it does not need the O/E conversion process, so a large number of O/E conversion interfaces are saved. Also, thanks to the removal of the bandwidth bottleneck, all-optical OXC capacity is expected to expand significantly in the near future. The transparency that comes with this huge capacity will support a variety of client-layer signals, and its low power consumption promises a long technical life. However, the switched bandwidth of all-optical OXC equipment must be at least a whole wavelength, which, viewed economically, requires much more consideration. Second, the existing interconnection system may need to be reconstructed to accommodate all-optical switching. Third, it is difficult to achieve performance monitoring in the optical domain. Fourth, the line connecting all-optical switches is composed of a series of equalized optical amplifiers, which makes it difficult to perform quick and dynamic wavelength routing in a finely equalized mesh network. Finally, all-optical network coverage is limited by dispersion and non-linear impairments. When one considers all these factors, especially the still inadequate demand for capacity that must be flexibly allocated by the network, the development of the all-optical OXC is hampered. Nevertheless, a few applications of the all-optical OXC have appeared in several places around the globe, and it is not difficult to believe that, with the continuing growth of network capacity and the increasing demand for better-quality network services, the all-optical OXC will gain wider worldwide acceptance within several years.

As network services continue to converge toward dynamic IP services, a more flexible and dynamic optical network is required. The latest trend is to introduce the ASON (automatic switched optical network), so as to replace static optical interworking with dynamic optical interworking. The advantages of such a replacement are as follows. Network resources can be dynamically allocated to routes, and the expansion time of the service layer is shortened. Services can be provided and extended very quickly. Both the initial construction cost and the operating cost of the network are lowered. The optical layer is capable of restoring services very quickly. The demand on supporting system software is lessened, as is the chance of human error. New wavelength services can be introduced, such as bandwidth on demand, wavelength wholesale, wavelength leasing, tiered bandwidth services, dynamic wavelength allocation and leasing, dynamic route allocation, and OVPN (optical virtual private network).

Attracted by the above-mentioned advantages, world-class telecom operators such as AT&T, BT and NTT have successfully introduced OXC and ASON into their networks. As a result, AT&T has been able to simplify its networking structure, which has not only improved bandwidth utilization, but also lowered its initial cost by 50% and its operating cost by as much as 60% (through simplified planning, provisioning and maintenance).

Because network signaling and routing are involved in ASON, it is very important to have a unified standard. In theory, there is little conflict among the three standardization organizations (ITU, IETF and OIF) currently working on ASON, each within its own specific field. In practice, however, conflicts do sometimes occur over detailed technical issues and specific choices, owing to technical, cultural and political diversity, so a great deal of time and effort will still be required before coordination between them improves. From the standpoint of transmission, the OEO hardware switching platform is already advanced enough to be put into commercial use, while the OOO hardware switching platform has yet to be practically tested for reliability, principally because of its strict bandwidth requirement and the lack of capacity demand. From the standpoint of control, the standards are nearly advanced enough that it will probably not be long before they, too, are put into practical use. Among the standards at different stages of development, UNI 1.0 has reached such an advanced state that its interoperability has been demonstrated by many manufacturers. As for UNI 2.0, its common features and RSVP signaling section are expected to be finished around the beginning of 2006. E-NNI 2.0 is not yet mature, and its estimated time of completion is sometime in the second half of 2006. From the standpoint of management, the ASON network management function is weakened by the control standards, since some functions have been transferred to the control plane. In the long run this will benefit interconnection between different manufacturers, and it is not believed that it will pose a major restriction on ASON application.

Over the past decade in China, the development of optical communications has always been spurred by the expansion of point-to-point link capacity. In recent years, however, highly dynamic IP services and private line services have been developing rapidly and steadily, and the network has been providing users with a relatively huge capacity. In addition, competition has become fiercer, so the development from a transmission network to a dynamic ASON network has now been put on the agenda. Constructing a highly flexible, dynamic and reliable transmission network with huge capacity is a key step in the transformation of China's transmission network, and is also considered a significant project in China's network development.

Development and prospect of FTTH technologies

There are two options for making FTTH a reality: point-to-point active Ethernet and the point-to-multipoint passive optical network.

The advantages of active Ethernet include the following. Bandwidth is guaranteed through dedicated access. The equipment is simple and cheap. The transmission distance is long. The cost scales in direct proportion to the number of users. There is low investment risk and high port utilization, so the cost is low in areas of low user density.

The disadvantages of active Ethernet include the following. Users cannot share terminal equipment and optical fibers, because these are dedicated to individual users, so active Ethernet is not suitable in areas of high user density. Active Ethernet requires a multipoint power supply and a standby power source, which makes powering and network management very complex. There is no unified standard for active Ethernet, so compatibility issues arise.

The advantages of a passive optical network include the following. First, a passive optical network is a purely passive medium, which avoids the impact of electromagnetic interference and lightning; the fault rate is lower, the bandwidth bottleneck is removed, reliability is improved, and maintenance costs are cut. Second, a passive optical network has good transparency and wide bandwidth; it is applicable to signals of any format and any bit rate, and it economically supports triple-play. Third, users share terminal equipment and optical fibers, so the cost is relatively low, with shorter fiber runs and less transmitting and receiving equipment; the marginal cost per user drops sharply, so a passive optical network is suitable where sub-areas have relatively high user density, even if the overall density of the area is low. Last, the passive optical network is highly standardized.
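A rough count of feeder fibers and central-office ports shows where the sharing advantage comes from; the 1:32 split and the 500-subscriber neighbourhood below are assumed purely for illustration.

```python
import math


def p2p_resources(subscribers: int) -> tuple[int, int]:
    # Point-to-point active Ethernet: one feeder fiber and one CO port per user.
    return subscribers, subscribers


def pon_resources(subscribers: int, split_ratio: int = 32) -> tuple[int, int]:
    # PON: each feeder fiber and OLT port is shared by split_ratio users
    # through a passive splitter in the field.
    feeders = math.ceil(subscribers / split_ratio)
    return feeders, feeders


subs = 500                                # illustrative neighbourhood size (assumed)
print(p2p_resources(subs))                # (500, 500): feeder fibers, CO ports
print(pon_resources(subs))                # (16, 16): feeder fibers, OLT PON ports
```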

In a passive optical network, the technology to be used for Layer 2 has not yet been settled. In recent years, the ATM-based APON was believed to be a good solution, but its use eventually faded out, due to its high cost, limited service capacity, low bit rate, low efficiency and the decline of ATM technology itself. More recently, with the emergence and development of IP technology, the concept of EPON has been brought forward. With a structure similar to APON and based on G.983, EPON keeps APON's physical layer (PON) while replacing the data link layer protocol, ATM, with Ethernet. Thus, EPON can provide services with broader bandwidth, lower cost and a wider service range.

The basic features of EPON technology include the following. The ATM and SDH layers are removed, so that both initial cost and operating cost are lowered. A large quantity of advanced Ethernet chips can be easily applied, while the cost remains very low. It provides several security mechanisms, such as VLAN, CUG (closed user group) and VPN.

In 2001, when the IEEE was devising EPON standards, FSAN was also starting to devise GPON (gigabit passive optical network) standards. Soon after, the ITU-T also joined the development of GPON and approved two GPON standards, G.984.1 and G.984.2, in 2003.

Compared with EPON, GPON offers several advantages. First, according to the latest standards, GPON can provide a 2.488 Gbit/s downlink bit rate, as well as several standard uplink bit rates, and can transmit over at least 20 km with split ratios of 1:16, 1:32, 1:64 and even 1:128. GPON is therefore superior to EPON in terms of the bit rates it provides, bit rate flexibility, transmission distance and split ratio. Second, GPON supports two adaptation methods: traditional ATM and the standard GFP (generic framing procedure). GFP can encapsulate all sorts of signals for the existing SDH network with considerable flexibility and efficiency, and can be applied to different signal formats and transmission standards. Third, GPON can flexibly support TDM voice services, because the GPON convergence layer is essentially synchronous; in contrast, EPON has no specific TDM provisions, so manufacturers each operate in their own way, which leads to poor interoperability and uncertain performance. Fourth, GPON has an abundance of network management functions, considerably more than EPON provides. Nevertheless, the EPON network management system is greatly improved compared with ordinary Ethernet, and can meet basic management requirements.
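As a quick sanity check on what those figures mean per subscriber, dividing the quoted downstream rate evenly across the quoted split ratios gives the average bandwidth share (individual users can burst above the average, since the capacity is shared dynamically):

\[
\frac{2.488\ \mathrm{Gbit/s}}{32} \approx 78\ \mathrm{Mbit/s\ per\ user},
\qquad
\frac{2.488\ \mathrm{Gbit/s}}{64} \approx 39\ \mathrm{Mbit/s\ per\ user},
\qquad
\frac{2.488\ \mathrm{Gbit/s}}{128} \approx 19\ \mathrm{Mbit/s\ per\ user}.
\]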

Generally speaking, GPON is an operator-driven standard, so operators' interests are given more consideration. It features higher and more flexible bit rates. Its generic mapping format can be applied to any existing or newly emerging service. Its OAM&P functions provide high transmission efficiency for all sorts of services, including TDM services. Thus, it is natural to assume that GPON may become the final solution that allows operators to make a smooth transition from the traditional TDM network to the all-IP network.

Apart from system technologies, FTTH also involves active and passive optical components, fiber-optic cable technology, connection technology, laying technology, test technology, network management technology, and so on, and breakthroughs in these areas are still required. Any bottleneck in these technologies, in cost or in operation, may hamper the large-scale development of FTTH. Therefore, FTTx, and especially FTTH, still require a great deal of fundamental work and R&D before these technologies become fully applicable. Although the price of FTTH equipment has dropped considerably in recent years, the price of EPON still remains at about $300 per user, which is nearly ten times that of ADSL. There are still high risks where video services are concerned, largely due to uncertain policies and an unpredictable market. As a result, China is not quite ready for any large-scale commercial use of FTTH; it is still in the early stages of field testing and trial commercial use. As the 2008 Olympic Games and the 2010 World Expo draw closer, so will the practical use of FTTH in China. FTTH is no longer considered too lofty a goal to achieve; with continued patience and proper preparation, the road to success may be just around the corner.


from:
http://www.huawei.com/publications/view.do?id=288&cid=95&pid=61
