"....all of God's creations, whether humans, animals, plants, nature and so on, need one another and help one another, because they are linked and bound to each other in one connected cycle. Therefore, do not sever that bond, lest the entire cycle be destroyed..." Ahmad Rais Johari
Friday, December 23, 2011
Private Finance Initiative (PFI) Seminar Key Note Address by YB Tan Sri Nor Mohamed Yakcop
Key Note Address
By:
YB Tan Sri Nor Mohamed Yakcop
Minister of Finance II
10 November 2006
Impiana Hotel, Kuala Lumpur
YBhg. Dato Shahrir Abdul Jalil,
Managing Partner of Shahrizat Rashid & Lee,
Mr. Alan Jenkins,
Chairman of Eversheds,
H.E. Mr. Boyd McCleary,
British High Commissioner to Malaysia
Distinguished Guests,
Ladies and Gentlemen,
Assalamualaikum w.b.t and Good Morning,
I would like to express my appreciation to the organisers for inviting me to speak at today's seminar on Private Finance Initiatives (PFI). This seminar is indeed timely and I would like to commend both Eversheds and Shahrizat Rashid & Lee for taking this initiative, a private initiative to advance the discussions on implementing PFI in Malaysia.
2. YAB Dato' Seri Abdullah bin Hj. Ahmad Badawi, Prime Minister of Malaysia, first mentioned Private Finance Initiatives (PFI) in his speech at the tabling of the Ninth Malaysia Plan, as a key modality to implement the country's national development agenda going forward. The 15-year National Mission articulated by the Prime Minister is a major challenge for the country, in striving to achieve the vision of developed nation status by 2020. We therefore require the full commitment and effort of both the public and private sectors to achieve Vision 2020. The introduction of the PFI concept by the Prime Minister is a key part of this effort as it involves establishing an optimal relationship in the partnership between the public and the private sector in driving national development.
3. Using PFI in pursuing national development must be seen in the context of the Government's broader policy priorities of energising the private sector as the engine of national economic growth and, at the same time, improving public delivery and services. Strong and sustained growth is required to maintain the trajectory towards Vision 2020. In order to achieve this, the success of the Ninth Malaysia Plan rests heavily on maintaining double digit growth rates for private investment. Towards promoting private sector consumption and investment, the Government has consistently maintained pro-growth economic policies.
4. The 2007 Budget clearly demonstrated the Government's focus on stimulating private sector participation. Firstly, the Budget was expansionary both in terms of expenditure and taxation. Secondly, comprehensive incentives were outlined for private sector participation in new growth sectors, particularly Biotechnology and Islamic Finance. Thirdly, the Government announced initiatives for joint investment between the Government and the private sector to catalyse new investment areas, such as in Southern Johor. Fourthly, the Prime Minister also articulated in the Budget the principles of disclosure, transparency, accountability and mutual trust as principles to enhance public delivery through private sector participation. Overall, the 2007 Budget very much reflects the Government's philosophy of facilitating a conducive environment for doing business, whether through providing infrastructure, enhancing public delivery or improving the tax system, and, where necessary to promote strategic sectors, providing the private sector with assistance in the form of incentives or joint investment.
Ladies and Gentlemen,
5. The introduction of PFI provides the Government with options going beyond the existing modalities of implementation, which thus far have mainly focused on either privatisation or conventional Government funded projects. In fact, in the Malaysian context, we view PFI in the broadest of terms, as capturing a wide spectrum of options that lie between the two extremes of privatisation and Government projects. In its purest form, privatisation involves the private sector financing the project entirely and taking all the risks, including revenue and viability risk. Government projects lie on the opposite end of the spectrum, whereby the projects are funded by the Government and the private sector's exposure is typically limited to just execution or construction risk. Even then, for Government funded projects, the Government is ultimately still exposed to the risk of having paid progress payments but with the contractor unable to complete the project. PFI, as a broad concept, recognizes the scope for a mutually beneficial arrangement in terms of the different permutations of structuring the relationship between the Government and private sector, particularly in terms of the allocation of risks and financing. The scope for formulating a win-win scenario arises because different projects involve different risks and rewards, and between the private sector and the public sector, certain risks and rewards are best borne by one party rather than the other.
6. In the Malaysian development context, among the key areas identified as suitable for the implementation of PFI include regional development such as for the Southern Johor Economic Region, education, public transportation, health and water infrastructure. As elaborated by the Prime Minister, in the Ninth Malaysia Plan, the PFI approach will be utilized broadly in two circumstances - first, to optimize implementation of Government projects and services; and second, to enhance the viability of private sector projects in strategic or promoted areas.
7. In the first circumstance, optimization in the implementation of Government projects includes both value for money and the quality of public services. Take, for example, the construction of a Government building. Undertaken as a conventional construction project, the Government is exposed, in the short run, to completion risk and, in the longer run, to the risk of escalating maintenance costs, especially where the contractor has no interest in ensuring the long term durability of the building. Alternatively, the project could be undertaken using a Build, Lease and Transfer approach, whereby the private sector will lease the building to the Government for, say, 20 years at a fixed lease payment that includes maintenance. In this structure, the Government does not start paying until the building is satisfactorily completed and ready for use. In the longer run, there is no risk of escalating maintenance costs. Indeed, with this structure, the private sector is incentivised to ensure a higher quality of construction to avoid the future burden of high maintenance costs. This simple example demonstrates the scope for value for money, by avoiding the risk of escalating maintenance costs, and for better quality in terms of the building construction.
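The value-for-money argument above can be sketched numerically. All figures below are hypothetical, chosen only to illustrate how escalating maintenance under conventional procurement compares with a fixed Build, Lease and Transfer lease; they are not drawn from any actual project.

```python
# Hypothetical 20-year comparison: conventional procurement with
# escalating maintenance vs. a Build-Lease-Transfer (BLT) fixed lease.
years = 20
construction_cost = 100.0   # assumed up-front cost (progress payments)
maintenance_y1 = 2.0        # assumed first-year maintenance cost
escalation = 0.08           # assumed annual maintenance escalation rate

# Conventional: pay construction up front, then rising maintenance.
conventional = construction_cost + sum(
    maintenance_y1 * (1 + escalation) ** t for t in range(years))

# BLT: a fixed annual lease (maintenance included), payable only
# once the building is completed and ready for use.
annual_lease = 9.0          # assumed fixed lease payment
blt = annual_lease * years

print(round(conventional, 1), round(blt, 1))
```

Under these assumed numbers the fixed lease comes out cheaper over the term; in practice the comparison depends on the lease rate, the discount rate, and how maintenance costs actually escalate.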
8. Maintenance is indeed a good example where private sector is well positioned to be more efficient and better able to manage the risk of controlling costs. This applies not only in terms of buildings but also in terms of equipment and transportation facilities such as trains and buses. A key factor in securing the potential benefits of the PFI approach is structuring the arrangement to ensure that the right risks are borne by the right party and that the incentives of the private sector are aligned appropriately. Key performance indicators or service level agreements can be put in place with the appropriate financial carrot and stick to derive the optimal relationship. It is in this context that advisors, both financial and legal, many of whom are present today can help create value, drawing from international experience, to advise the Government and private sector participants in terms of how best to achieve an efficient and equitable sharing of risks and rewards.
9. The Government has already commenced implementation of these types of PFI projects. The projects have been identified and work has started in terms of preparation of designs and award of contracts. Pembinaan BLT Sdn Bhd was formed last year and by the end of this year would have commenced implementation on more than RM 2 billion worth of projects relating to police quarters and buildings, using a Build, Lease and Transfer approach. Under the Ninth Malaysia Plan, RM20 billion worth of projects, including schools and Government buildings, was also approved to be implemented using the Build, Lease and Transfer approach. In the international experience, the efficiency savings from the private sector bearing the risks have often been partially offset by the higher cost of financing by the private sector. In the model implemented by the Government under, for example, Pembinaan BLT, not all of the risks have been transferred to the private sector. However, the financing has been secured based on the lower Government cost of funding. Going forward, we expect to engage with the private sector on different permutations of risk and reward sharing towards continually improving our PFI structures.
Ladies and Gentlemen,
10. In the second circumstance identified by the Prime Minister, the Government will help enhance the viability of private sector or privatisation projects in strategic or promoted areas. The basic rationale here is that there are various potential private projects which could be on the borderline in terms of viability and may therefore not be implemented. However, amongst these projects, there would be some which are highly beneficial to the country, in the sense that they would result in significant benefits and spinoffs which are public goods in nature and would not be fully captured by the private sector party. With a little assistance from the Government, these projects can be implemented with the private sector bearing all the risk and accruing the private returns, while at the same time the country benefits. Again, this would be a win-win arrangement between the Government and private sector.
11. The Prime Minister has already announced a facilitation fund of RM 5 billion to provide such support. Thus far, the Government has already announced that the 2nd Penang Bridge will proceed on this basis. The principle is well demonstrated here in the sense that whilst a privatized concession alone may not be sufficient to finance the project, the project will result in large spinoffs for the development of the Northern region and thus justifies Government support. To evaluate projects such as these, a central PFI unit has been formed, with its secretariat based in the Economic Planning Unit.
12. This approach of enhancing the viability of private sector projects will also be utilized where Government assistance can play a role in catalysing and creating momentum for investment in new growth areas. As mentioned earlier, such measures were announced in the 2007 Budget.
13. One such measure was the formation of the Creative Industry Development Fund with an initial allocation of RM 100 million. The Fund will be used to jointly invest with private sector parties in developing export quality media content. Private sector parties identified to participate include Media Prima and TM. We have seen the success of the Indian and Korean film industries, and we believe Malaysia is not short of talent. Thus, the Government believes a focus on producing high quality content, whether in the areas of film, animation, computer games or theatre, has the potential to develop into a thriving industry. Whereas in the past the Government promoted new growth industries primarily through tax incentives, a PFI approach is now available as another modality to build up a strategic infant industry.
Ladies and Gentlemen,
14. In addition to new industries, the PFI approach will be utilized to develop new regions. In the 2007 Budget, in addition to the amounts to be spent on infrastructure, a specific allocation of RM 200 million was provided to establish a strategic investment fund for the Southern Johor Economic Region. The fund will be utilized to spur investments in new industry clusters, particularly for private sector education and healthcare. Towards catalyzing a more rapid development of these clusters, the fund will be used as an incentive to support and jointly invest with the early entrants.
15. In addition to providing support through joint investments with private sector parties, we expect there are many innovative means to help private sector parties enhance viability in a mutually beneficial manner. To assist the initial entrants of private universities and hospitals into Southern Johor, the Government could enhance viability by committing to procure a level of services in the future, such as sending a certain number of Government sponsored students to these universities. The commitment, however, would be tied to performance criteria such as quality of education and employability of graduates. This can operate as an incentive for the university to improve itself towards securing more Government sponsored students.
Ladies and Gentlemen,
16. The success of the PFI rests on getting an optimal partnership between public and private sector in terms of sharing the risks and rewards, in addition to incentivising the alignment of interests. Well structured, a PFI approach will be mutually beneficial in providing the private sector a market return and providing the Government with value for money, higher quality of public services and broader economic spinoffs. I am confident that the PFI approach will increasingly play a larger role in promoting strategic private sector investments. The Government looks forward to engaging with the private sector in developing workable and efficient PFI models towards advancing the national development agenda.
17. It is through seminars such as these that both public and private sector participants are able to gain insights from international experience in order to develop applications for Malaysia. I would like to again thank the organizers and sponsors for making today's seminar possible, and wish all of the participants a fruitful discussion.
Thank you.
10 November 2006
Brocade’s network hardware price model: Pay-as-you-go
Why should you buy your switches and routers when you can rent them month-to-month? Brocade is offering that option as of this week with its new Brocade Network Subscription, a pay-as-you-go network hardware price model.
IT cost reduction has always been an issue for enterprises, particularly with network hardware prices. Cisco Systems’ customers joke about a “Cisco tax” because the company charges premium prices for its equipment; meanwhile, vendors like HP Networking and Juniper Networks win over deals by offering lower list prices on their switches and routers.
Attempting to drive down costs, many organizations turn to leasing network infrastructure rather than buying it. But this only shifts costs from capital to operational, and lease agreements bind a customer for a minimum number of years, charging a penalty if the enterprise backs out of the deal early.
Brocade’s new network hardware price model, announced at VMworld this week, is a month-to-month “rental” of network infrastructure, which won't necessarily bring down costs, but will enable IT shops to try on new technology for size with the ability to return or exchange without penalty—and that could mean overall savings if companies are able to avoid overbuying or investing in technology that doesn't work for them.
The program, available immediately, covers all of Brocade’s IP/Ethernet products and includes Essential Support from Brocade Global Services. Brocade hasn’t published the actual subscription rates for the program, but it is offering free quotes on its website. The company will also continue to offer its original network hardware price scheme alongside Brocade Network Subscription.
Pay-as-you-go networks could make enterprises early adopters
Aaron Mahler, director of network services at Sweet Briar College in Virginia, is less than halfway into five-year leases from both Juniper and Meraki for the college's network infrastructure. While Mahler usually leaves network hardware price analysis to his financial officers, the flexibility of a pay-as-you-go model intrigues him because it introduces the potential to try new technology.
“If there are no penalties [for canceling a hardware subscription], that would make us much more nimble in terms of scaling with the network we have. If a big shift in technology happens, it would be nice to be able to make that change within the term of our lease. As long as our finance folks look at the numbers and say it makes sense from a total cost perspective, then I would definitely be interested in it.”
Being nimble is especially important at a time when so many new networking technologies are pending. So, for example, as all of the major networking vendors hammer out their data center roadmaps, network managers can use the pay-as-you-go approach to wait out a plan from their preferred vendor, said Andre Kindness, senior analyst with Forrester Research.
“If Juniper had this for their products, customers would feel comfortable with bringing [Juniper’s] EX8200 [into their data centers] and then switch to QFabric down the line. They wouldn’t be as scared to invest. It’s lower risk.”
Pay-as-you-go models also allow organizations to back out of technology that doesn't pan out, mitigating the risks in trying new architectures, according to Mike Spanbauer, principal analyst with Current Analysis. That's helpful considering vendors are currently knee-deep in choosing sides among competing pre-standard technologies like Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging (SPB).
Brocade has rolled out its new VCS data center network fabric, based loosely on TRILL, and its new line of VDX data center switches. With no capital investment and no penalty for backing out, users are much more likely to try the new technology.
“There’s no commitment to a single path necessarily because you can return [the hardware] if it doesn’t work out for you. Once it’s installed you definitely have migration challenges to get off that equipment, but you’d have that challenge with any solution. In this case you don’t have to worry about capital depreciation issues that limited you to only making changes every three years or so,” said Spanbauer.
Economic environment demands new network hardware price models
Beyond enabling technical innovation, pay-as-you-go models may help companies drive down costs.
Whether pay-as-you-go networks are cheaper than those bought with a traditional capital budget will probably depend on how long an enterprise keeps the rented network in place and how well it plans for growth. Most enterprises build a network with a lifecycle of five to seven years with excess capacity to account for growth over that time. A company that builds a pay-as-you-go network can install and pay for only the capacity that is needed, and add more ports when growth is required.
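The trade-off described above can be made concrete with a toy model. All prices and growth figures here are hypothetical, invented purely to illustrate the comparison (Brocade has not published its subscription rates):

```python
# Toy comparison: buy for peak five-year capacity up front vs. rent
# only the ports actually deployed each year. Figures are invented.
ports_needed = [100, 120, 140, 160, 180]  # assumed yearly port counts
buy_price_per_port = 500                  # assumed one-time capital cost
rent_per_port_year = 150                  # assumed annual subscription

# Traditional purchase: provision for peak demand on day one.
capex = max(ports_needed) * buy_price_per_port

# Pay-as-you-go: pay each year only for the ports in use.
opex = sum(ports * rent_per_port_year for ports in ports_needed)

print(capex, opex)  # 90000 105000
```

Under these assumptions renting costs more over the full period, which matches the caution that pay-as-you-go won't necessarily bring down costs; the model tips in renting's favour only when growth is hard to predict or overbuying is likely.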
Some vendors have introduced pricing schemes for application delivery controllers and WAN optimization appliances that allow customers to pay a fee for a temporary burst in capacity when needed, said Kindness. Meraki, a provider of wireless LAN infrastructure, also introduced a pay-as-you-go model to its network hardware price scheme earlier this year.
“When pharmaceutical manufacturers buy chemicals, they’ll bring in two truckloads of a chemical. But if they only use one truckload, they can send the other one back,” said Kindness. The same need is growing in IT infrastructure spending, he said.
(Source - http://searchnetworking.techtarget.com)
Cisco Live 2011: Catalyst 6500 upgrade the game changer?
LAS VEGAS—Cisco served up comfort food for the networking masses on the first day of Cisco Live 2011, sidestepping edgy cloud announcements and focusing instead on a major Catalyst 6500 upgrade.
Cisco is in full battle mode in the switching market, where it has lost some ground to competitors with less expensive equipment, including HP Networking. Some customers had expected Cisco to launch a smaller and less expensive addition to the Nexus line (the Nexus 7009 mentioned at Cisco Live 2010), but the Catalyst 6500 upgrade will enable 25,000 existing customers to upgrade their E-Series chassis without the cost of a rip and replace. The message is that they don't need to go with less expensive and less functional equipment from competitors.
“Our goal and aim was to make sure we could protect those customers' investment,” said Scott Gainey, Cisco director of marketing.
The refresh is centered on the Catalyst 6500 Series Supervisor Engine 2T, a 2-terabit card that nearly triples the throughput capability of the 6500 switch from 720 Gbps to 2 Tbps and adds virtualization segmentation. Cisco execs compared the $38,000 Supervisor 2T to HP's A9508 switch, saying customers can triple the performance at one-third of the cost with this upgrade.
HP called Cisco's comparison of the Supervisor 2T with HP's A9508 "meaningless." Mike Nielsen, director of solution marketing at HP, said that Cisco is comparing the price of a supervisor engine upgrade with the cost of a complete chassis switch system from HP. He also pointed out that HP launched a new competitor to the Catalyst 6500 platform at Interop, the A10500 series, which outperforms an upgraded 6500.
"HP delivers two times Cisco's performance with the HP 10500. Cisco 2T delivers 80 Gbps per slot; HP 10500 doubles that to 160 Gbps," Nielsen said.
The Catalyst 6500 upgrade also includes 10 Gigabit Ethernet line cards: the 6900 8-port 10G card with baked-in TrustSec security and the 6800, which includes two 16-port 10G modules and a 48-port Gigabit Ethernet module. Cisco also announced service modules that enable a high-performance next-generation firewall, an application control engine for acceleration and security, more comprehensive NetFlow capabilities, and mobility management that supports more than 10,000 devices on one module. Cisco says the combined bandwidth from the cards and supervisor makes the Catalyst 6500 40 GbE ready, but the company hasn't announced any 40 GbE ports yet.
Catalyst 6500 upgrade? What about the Nexus transition?
Many believed that the Nexus line was meant to replace the aging Catalyst 6500, but this week at Cisco Live, execs said the two addressed very separate markets with different needs.
“The Nexus was meant to bring 10 Gigabit Ethernet into the data center, but gigabit Ethernet is also enormous and there are segments [other than the data center] that have to be addressed. The 6500 fits the sweet spot of the campus that nobody in the market can keep up with,” said John McCool, senior vice president of data center and switching.
“We see the market bifurcating into a campus-based market that needs rich services and the data center network with convergence that takes a different functionality,” he added.
For those who want to keep existing 6500s in the core and aren't concerned about building a Nexus-based data center and managing two sets of equipment, the release seems only positive.
"The core of the network may not always get the limelight, but it makes or breaks the performance of the applications our faculty, students, and researchers depend upon daily,” said Ed Wilson, network test engineer at Pennsylvania State University, who was part of Cisco's press launch. “The introduction of the Catalyst 6500 Supervisor Engine 2T will extend our investment in Cisco Systems.”
On the other hand, customers who have invested heavily in Cisco's server products, the Unified Computing System (UCS), and built a Nexus-based network to support UCS want to see more than a Catalyst 6500 upgrade. Many of these users will eventually build a core-to-edge 10 GbE network and had gotten the message from Cisco that the 6500 would eventually be replaced by the Nexus.
“We're going with the Nexus because it has FCoE capabilities and we're looking at the long-term architecture. Also, we need the virtualization abilities of the Nexus,” said Rich Parker, security and communications manager at law firm Baker Botts LLP. “I've also heard this is the last supervisor upgrade for the 6500, so that's not an investment we would make.”
Adding speed and functionality to a much-loved switch is never a bad thing, said Gestalt IT founder Stephen Foskett. But it's also not the most exciting thing Cisco could have announced when it comes to switching, he said.
(Source - http://searchnetworking.techtarget.com)
Thursday, December 22, 2011
EVER WATCHFUL: CyberSecurity Malaysia says policing the trustworthiness of security certificates must be proactive and continuous. - Reuters
This comes in the wake of three major Internet browser makers revoking trust in local intermediate certificate authority (CA) DigiCert Sdn Bhd.
Google, Mozilla and Microsoft revoked trust in DigiCert following its issuance of 22 certificates with weak keys that lacked usage extensions and revocation information.
Security certificates are used as a means of verifying the identity of a website that a user visits. On Nov 3, identity-based security software and services company Entrust, which counts DigiCert as one of its subordinate CAs, issued a statement on its website stating: "Their (DigiCert's) certificate issuing practices violated their agreement, their Certification Practice Statement, and accepted CA standards."
Entrust also globally revoked DigiCert's signing certificates on Nov 8, allowing time for their customers to acquire valid replacement certificates.
According to online reports, two of the weak certificates issued by DigiCert were allegedly used to disguise malware which was used in a targeted attack against another Asian certificate authority. The authority noticed the attack and raised the alarm.
In addition to having only 512-bit encryption, the DigiCert certificates did not contain the Extended Key Usage (EKU) extension, which tells browsers what rights a digital certificate should have, or revocation information, which would have allowed for a certificate recall.
In a statement issued on its website, Mozilla expressed concern with the technical practices of DigiCert, which it said was the main reason behind its decision to revoke its trust.
An attacker could use one of these weak certificates to impersonate the legitimate owners. This could deceive users into trusting websites or verifying software that appeared to originate from these owners but in actuality could contain malicious software, the company said.
The certificates in question were issued to a mix of Malaysian government websites and internal systems. Mozilla said it did not believe other sites were at risk.
Not the same
Lt Col (Ret) Prof Datuk Husin Jazri, CEO of CyberSecurity Malaysia, said: "From our understanding, the revocation of trust is due to not fully complying with the strict standards required in issuing SSL certificates.
"This is not something that the big browser players are willing to tolerate." An agency under the Ministry of Science, Technology and Innovation, CyberSecurity is also one of DigiCert's clients.
Husin said this incident is unlike the case of DigiNotar, a Dutch CA owned by VASCO Data Security International which experienced a security breach earlier this year, resulting in the fraudulent issuing of certificates, and was later declared bankrupt.
"However, big players like Mozilla, Microsoft and Google will not take chances no matter how small the issue is when it comes to trust or security issues because they are in an industry where trust is of utmost importance," he added.
DigiCert issued a statement on Nov 5 and denied any fraudulent activity on its part. "We view the allegations as very serious and we vehemently deny any fraudulent act on our part.
"Nevertheless, we are currently investigating what had prompted such allegations and we are treating this matter as our top priority," DigiCert CEO Mohd Rosdeen Hassan said in the statement.
In a follow-up statement, issued on Nov 7, the company acknowledged the issuance of the certificates with weak keys. In this, it stated: "The SSL 512-bit key certificates issued under Digisign Server ID have mismatched capabilities from the prescribed standards."
Quick work
DigiCert has since revoked the 22 certificates and advised the Internet browser companies to blacklist the certificates in addition to sending out advisories to impacted customers to replace their current Secure Socket Layer (SSL) certificates.
Rosdeen said the process of re-issuing new 2,048-bit security certificates began on Nov 7, with a special task force and a dedicated call centre set up to answer queries from its customers. "We are going above the minimum prescribed standard (1,024-bit encryption) because we believe this is in the best interest of our clients," he said.
When asked why such weak certificates were issued in the first place, Rosdeen said the reason for the issuance of the 512-bit key certificates was prompted by requests on their clients' part.
"Certain clients felt that 512-bit was enough for their sites, with stronger encryption potentially having a detrimental effect on the performance of their applications," he said. DigiCert said about 600 sites are impacted by this revocation and that the process of changing the certificates would take days because the main hurdle is contacting all the affected parties and guiding them through the process.
Rosdeen said the company is revising its internal policy to incorporate stricter processes for the issuance of certificates for all SSL customers and will adopt a WebTrust program so that in future it will not be dependent on foreign root CAs.
CyberSecurity's Husin praised DigiCert for its quick action. "It is notable that DigiCert took immediate mitigation steps for all the affected sites," he said. "All of their customers are now signed directly with Entrust."
Bad time
The DigiCert case comes at a time of heightened alerts surrounding CAs, with a growing list of companies that have had to admit they suffered serious attacks on their certificate infrastructure this year.
Husin reported that CyberSecurity is seeing increasing incidents where valid certificates are stolen from computers or servers that store them and are being used to sign malware.
"From these events we see the need for CAs to beef up security and this could be achieved by having proactive and continuous security practices," he said.
Husin said CAs need to be responsive to security incidents reported by security teams or researchers, and exercise the revocation policy more promptly once those incidents are detected.
"The Government could consider implementing stronger audit policies for security certificates, and appoint an agency to enforce them," he said.
Or, he said, CAs in Malaysia could be categorised as a Critical Sector under the Critical National Information Infrastructure (CNII), thus requiring these companies to comply with the more stringent CNII security standards.
Tuesday, December 20, 2011
DYNAMIC STUDENT PERSONALITY DEVELOPMENT BASED ON THE CHICKERING MODEL
Monday, December 19, 2011
Empowering Off-Campus Students - Keynote Address by Y.Bhg. Dato' Prof. Mohd. Noh Dalimin
The Student Affairs division needs to adopt a creative and innovative approach to attract non-resident students to join programmes that benefit the local community in which they live, so as to create a harmonious relationship between the community and the students.
(to be continued)
Sunday, December 18, 2011
17 Ways To Speed Up Your Network -- For Free
By Phil Britt
Got a sluggish network, but don't want to break the bank speeding it up? We've got free and relatively inexpensive help for you. While some of the steps we recommend might include minor hardware upgrades, they are far less expensive than large consulting contracts or "forklift-type" IT upgrades.
To get our tips, we've polled three networking specialists for their advice. They've come up with 17 tips -- here's what they have to say.
Tom Leahy, product marketing manager for IP services at Pittsburgh, Pa.-based TelCove, an integrated communications provider that offers Internet, voice, and data solutions, recommends these steps to boost network performance:
1. Assess traffic loads on the network, including the destination and source of all traffic. By moving around some network resources, a company may be able to improve network performance. For example, in a campus environment, if a particular server is being used by people in a common location (i.e., a particular building), the obvious thing to do is to make sure that server is actually located in that building. Otherwise that traffic will bog down other communications that must go between buildings.
2. Optimize IP addressing. This helps minimize the load on routers. The shorter the lookup table a router needs to determine where to send packets, the better.
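Tip 2's shorter-lookup-table idea can be illustrated with route aggregation: contiguous subnets collapse into a single covering prefix, so the router holds one entry instead of many. A small sketch using Python's standard ipaddress module:

```python
import ipaddress

# Four contiguous /24s that would otherwise occupy four routing-table entries.
subnets = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
    ipaddress.ip_network("10.0.3.0/24"),
]

# collapse_addresses merges adjacent networks into the smallest covering set.
aggregated = list(ipaddress.collapse_addresses(subnets))
print(aggregated)  # [IPv4Network('10.0.0.0/22')]
```

The same principle applied when assigning address blocks in the first place keeps router lookup tables short by design.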
John Heasley, one of the co-founders of Shrubbery Networks, a Portland, Ore.-based computer and network consulting services company, offers these recommendations:
3. Adjust hosts and network devices to use a larger maximum segment size (MSS) at the initial connection: on Ethernet, up to ~1460 bytes, which is the 1500-byte maximum transmission unit (MTU) minus 40 bytes of IP and TCP headers. The old 576-byte default MTU is antiquated, and most links should support the larger size by now. Just make sure hosts do not set the DF (Don't Fragment) bit on every frame (Microsoft likes to do this).
In fact, Path MTU discovery can increase the effective segment size over time, but it doesn't help short-lived connections (i.e., for the Web).
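The arithmetic behind the segment sizes mentioned above is simple enough to check directly:

```python
# Ethernet MTU minus IPv4 and TCP header overhead gives the usable TCP MSS.
ETHERNET_MTU = 1500       # bytes of IP payload per Ethernet frame
IP_HEADER = 20            # minimum IPv4 header, no options
TCP_HEADER = 20           # minimum TCP header, no options

mss = ETHERNET_MTU - IP_HEADER - TCP_HEADER
print(mss)  # 1460

# The antiquated 576-byte default MTU yields a much smaller segment:
old_mss = 576 - IP_HEADER - TCP_HEADER
print(old_mss)  # 536
```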
4. Turn off IPX. Heasley calls IPX "one of the worst protocols ever." IPX is very chatty, Heasley explains, and is therefore very susceptible to any kind of latency. Turning it off also reduces overall operating expense, because network administrators only have to verify a smaller subset of code for network device software upgrades.
Turning off IPX can also improve overall throughput for network devices that only support process switching for these (or all) protocols, since these protocols tend to be heavier and less efficient (in terms of overall code efficiency). NetBEUI can safely be turned off as well.
5. Increase default socket (or streams) send and receive buffer space to at least 64k on all servers and clients.
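Tip 5 can be applied from application code as well as in system-wide settings. A minimal Python sketch; note that the kernel may adjust the value you request (Linux, for instance, doubles it to leave room for bookkeeping), so read it back rather than assuming the exact figure stuck:

```python
import socket

BUF_SIZE = 64 * 1024  # 64k, the floor recommended above

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)

# Read back the effective value; it should be at least what we asked for.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)
sock.close()
```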
6. Optimize the router interface access control lists (ACLs). These often become inefficient over time as people add things to existing ACLs and don't delete them when those things are removed from the network.
7. Check Ethernet links for the greatest possible link speed and duplex (mismatches occur) and errors.
8. Increase the overall bandwidth between devices with link bundling (IEEE standard 802.3ad).
9. Use multicast when you can and when it's warranted.
10. Use web browsers that support pipelining. Firefox supports pipelining, but Heasley says that although Internet Explorer supports pipelining, he is not sure that it supports it properly.
11. Make sure routing is efficient. Use a routing protocol rather than static routes to avoid inefficiencies.
12. Avoid loops in switching topologies. Spanning tree protocol (STP) is not good at choosing the best path.
Tim Hebert, chief operating officer of Atrion Networking, Warwick, R.I., a systems integrator and network services provider that has been doing Cisco infrastructure work since 1987, adds the following advice:
13. Look at multicasting settings, which may not be turned on. Multicasting uses a multicast address to send the same data stream to multiple recipients while using the least bandwidth. Without multicasting, multiple unicast-addressed copies of the data stream would have to be sent to individual recipients. Multicast services can control the distribution of multicasts by determining which switch ports need to participate in multicasts.
14. Consider using a private virtual LAN to give certain applications higher priorities.
Ed Keiper, manager of network services for Lantium, Inc., an Audubon, Pa.-based company that provides network services, consulting, and outsourcing, suggests that network performance may be improved by doing the following:
15. Replace hubs with faster-working switches. The cost of switches has dropped significantly, so the improved performance may be well worth the investment. Lantium estimates that the cost of switches is about a third of the cost of hubs -- an estimated $5.53 per port for switches versus an estimated $15.63 per port for hubs.
16. Ensure that the network's fastest systems run the most demanding applications. Sometimes new, faster PCs are added to the network, but some of the most resource-intensive applications remain on older machines. Total network performance suffers as a result.
17. Make sure that any cable runs are short enough for maximum performance. While a system may theoretically be able to handle a cable run of 300 feet, distances of 100 feet will provide much better performance.
This Article Reprinted Courtesy of http://informationweek.com
Saturday, December 17, 2011
Facebook shares some secrets on making MySQL scale
When you're storing every transaction for 800 million users and handling more than 60 million queries per second, your database environment had better be something special. Many readers might see these numbers and think NoSQL, but Facebook held a Tech Talk on Monday night explaining how it built a MySQL environment capable of handling everything the company needs in terms of scale, performance and availability.
Over the summer, I reported on Michael Stonebraker's stance that Facebook is trapped in a MySQL "fate worse than death" because of its dependence on an outdated database paired with a complicated sharding and caching approach (read the comments and this follow-up post for a bevy of opinions on the validity of Stonebraker's stance on SQL). Facebook declined an official comment at the time, but last night's talk proved to me that Stonebraker (and I) might have been wrong.
Keeping up with performance
Kicking off the event, Facebook's Domas Mituzas shared some stats that illustrate the importance of its MySQL user database:
- MySQL handles pretty much every user interaction: likes, shares, status updates, alerts, requests, etc.
- Facebook has 800 million users; 500 million of them visit the site daily.
- 350 million mobile users are constantly pushing and pulling status updates
- 7 million applications and web sites are integrated into the Facebook platform
- User data sets are made even larger by taking into account both scope and time
And, as Mituzas pointed out, everything on Facebook is social, so every action has a ripple effect that spreads beyond that specific user. "It's not just about me accessing some object," he said. "It's also about analyzing and ranking that content across all my friends' activities." The result (although Mituzas noted these numbers are somewhat outdated) is 60 million queries per second, and nearly 4 million row changes per second.
Facebook shards, or splits its database into numerous distinct sections, because of the sheer volume of the data it stores (a number it doesn't share), but it caches extensively in order to serve all these transactions in a hurry. In fact, most queries (more than 90 percent) never hit the database at all but only touch the cache layer. Facebook relies heavily on the open-source memcached caching tool, as well as its custom-built Flashcache module for caching data on solid-state drives.
Keeping up with scale
But Facebook wants to buy fewer servers while still improving MySQL performance. Looking forward, Konetchy said some primary objectives are to automate the splitting of large data sets onto underutilized hardware, to improve MySQL compression and to move more data to the Hadoop-based HBase data store when appropriate. NoSQL databases such as HBase (which powers Facebook Messages) weren't really around when Facebook built its MySQL environment, so there is likely unstructured or semi-structured data currently in MySQL that is better suited to HBase.
With all this growth, why MySQL?
The logical question when one sees growth and performance requirements like this is "Why stick with MySQL?" As Stonebraker pointed out over the summer, both NoSQL and NewSQL are arguably better suited to large-scale web applications than is MySQL. Perhaps, but Facebook begs to differ.
Facebook's Mark Callaghan, who spent eight years as a "principal member of the technical staff" at Oracle, explained that using open-source software lets Facebook run with "orders of magnitude" more machines than people, which means lots of money saved on software licenses and lots of time put into working on new features (many of which, including the rather cool Online Schema Change, are discussed in the talk).
Additionally, he said, the patch and update cycles at companies like Oracle are far slower than what Facebook can achieve by working on issues internally and with an open-source community. The same holds true for general support issues, which Facebook can resolve itself in hours instead of waiting days for commercial support.
On the performance front, Callaghan noted, Facebook might find some interesting things if large vendors allowed it to benchmark their products. But they won't, and they won't let Facebook publish the results, so MySQL it is. Plus, he said, you really can tune MySQL to perform very fast per node if you know what you're doing, and Facebook has the best MySQL team around. That also helps keep costs down because it requires fewer servers.
Callaghan was more open to using NoSQL databases, but said they're still not quite ready for primetime, especially for mission-critical workloads such as Facebook's user database. The implementations just aren't as mature, he said, and there are no published cases of NoSQL databases operating at the scale of Facebook's MySQL database. And, Callaghan noted, the HBase engineering team at Facebook is quite a bit larger than the MySQL engineering team, suggesting that tuning HBase to meet Facebook's needs is a more resource-intensive process than tuning MySQL at this point.
The whole debate about Facebook and MySQL was never really about whether it should be using it, but rather about how much work it has put into MySQL to make it work at Facebook scale. The answer, clearly, is a lot, but Facebook seems to have it down to an art at this point, and everyone appears pretty happy with what they have in place and how they plan to improve it. It doesn't seem like a fate worse than death, and if it had to start from scratch, I don't get the impression Facebook would do too much differently, even with the new database offerings available today.
Network software bugs: Are Cisco and others doing enough?
It seems that the IT Industry is willing to accept that software bugs are unavoidable and that licensing agreements, along with patches, absolve vendors from any responsibility. That may be why there is so little hubbub around what I sense to be an increase in network software problems – and specifically Cisco IOS bugs.
It's not that bugs in general are a new issue. Microsoft releases between 20 and 60 patches per month for critical bugs. But with Cisco IOS software, I have noticed a significant decline in product reliability over the last two or three years, which is suspiciously the same timeframe as the company's financial problems. Maybe I am paranoid, but I have to wonder if Cisco is cutting corners on testing and validation programs in its Indian development centers.
I've learned that IOS software development is segmented into verticals: BGP, IP Multicast, OSPF, MPLS, etc. All of these are developed by independent teams with their own budgets and management. But there seems to be a gap in end-to-end testing. For example, I wonder if there is testing of BGP and IP Multicast integration, or MPLS and OSPF integration.
Why are bugs so troubling in networking?
In an ITIL-compliant world, bugs are an identified risk, and projects allocate hundreds or thousands of man-hours to testing and validation in an attempt to locate product flaws. The cost of customer-driven network validation and testing has risen dramatically in the last five years. The trend is evident in the wide range of new testing products and solutions.
On one hand, this is not a bad thing, as we can now build better networks. But for every bug found, confidence in the network is undermined. There is already a significant perception in IT management circles that the network is unreliable and risky. That's why getting change windows for regular upgrades is almost impossible.
When will vendors do more?
Some people say that vendor technical support is here to fix these problems, but that's not why I pay for this service. I pay tech support for hardware failures, software upgrades and configuration support, not to receive a half-finished product.
Which leads to the question: Are vendors delivering faulty products? If customers are going to perform their own testing, locate bugs and then advise the vendors through tech support programs (paid for by the customer), then what motivates the vendor to keep software quality high?
It is true that the complexity of modern products means that some bugs or product flaws will occur. But if vendors scale back their testing programs to save money, who suffers? And who will know?
(Source - http://searchnetworking.techtarget.com/)
Network technology trends 2012: Out-of-band management and DevOps
How will virtual desktop infrastructure (VDI) help enterprises with IT consumerization?
Lori MacVittie: We're back in that world where we had three different versions of Windows and were asking how to support all these applications. We're seeing that with all the different tablets and smartphones and laptops. We've got applications that might not necessarily work very well on tablets, and we want to make sure that users can get to those. But we don't want to write native clients. It's just not feasible for IT to write applications for 50 different operating systems and platforms.
So if you pull virtual desktop infrastructure into the picture, it controls the application in the VDI environment. It keeps the data inside the data center for the most part, so you can still apply the right security. And you get a little bit more control without constraining the end user. They get to use the device where they want, when they need it. But you don't have to worry about the management of the actual endpoint. Of course that has an impact on the network, because you're talking about new and different protocols and more devices. Some people like to multi-task. There is a lot of traffic, and there are a lot of changes to infrastructure that have to be made to support something like that.
What will enterprises have to do on the network side to support all of this virtual desktop infrastructure resulting from IT consumerization?
MacVittie: One of the first things is managing access. Who are we going to allow? From where? And over what network? One of the interesting things about the phones and even some tablets today is that you can connect over both the mobile network and your Wi-Fi. I can turn on Wi-Fi on my phone, and suddenly I'll be on the Wi-Fi network instead of the mobile network. That particular piece of information is important to the network. If I'm on Wi-Fi, you know that my phone is in the building on the network. If I'm coming over the mobile network, I could be anywhere. There may be a need to control access from certain locations, such as saying this information can't be delivered outside the building. So if you're coming in over a corporate Wi-Fi connection, I'll let you have it, but if you're coming over a mobile carrier network, I don't know where you are and you can't have it.
That ability to dig down and see who you are, what you are using, where you are and what it is you want is going to be important to controlling who is going to get access. That's a lot of traffic going back and forth. You need to identify the user, you have to pick up the information out of the data that's being transferred and the protocols themselves, and you need to be able to make intelligent decisions about it and start sending people to the right places. So I think that access management layer is going to be very important, just trying to keep control of what you can: the resources and the applications.
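The kind of context-aware access decision described here can be sketched as a small policy function. All names and rule categories below are hypothetical illustrations of the idea, not an actual F5 or VDI API:

```python
# Hypothetical policy: building-restricted data may only be delivered over
# the corporate Wi-Fi (where the user's location is known); everything else
# is allowed from either network.
def allow_access(resource_sensitivity, network):
    """Return True if a resource may be delivered over the given network."""
    if resource_sensitivity == "building-restricted":
        return network == "corporate-wifi"
    return network in ("corporate-wifi", "mobile-carrier")

print(allow_access("building-restricted", "corporate-wifi"))  # True
print(allow_access("building-restricted", "mobile-carrier"))  # False
print(allow_access("general", "mobile-carrier"))              # True
```

A real access-management layer would of course also weigh the user's identity and the device type, as the interview notes, but the shape of the decision is the same: context in, allow/deny out.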
F5 has mentioned that the out-of-band management network will be a technology trend in 2012. Whatever happened to it in the first place?
MacVittie: The networks got so fast and so fat that we didn’t have a problem with congestion. So we could keep it all on the same network. It was easier, and everything was static. We didn’t really need to have real-time [management] communication. If we needed to get some information from a switch, we could pull it with SNMP. It wasn't imperative that we got it in 0.5 seconds. If I got it in 3 seconds or 5 seconds that was fine, because I was really just digging for information or running a report or trying to hook it up to some bigger management system like [HP] OpenView.
Why do you think the out-of-band management network is coming back?
Find out how NYSE-Euronext built an out-of-band management network
MacVittie: As we're seeing all these things getting more dynamic, and [enterprises] want to provision [services] on demand, that requires a lot of interaction and it can be very time-sensitive. We need to be sure that if we need more capacity, that message actually gets to all the components involved at the right time; that it's not delayed; that it's not lost.
Automation is going to make us again more sensitive to the ability of all those components to receive things in a timely way, and that may require out-of-band management networks because the traffic on [production] networks is increasing. We have a lot of video and twice the number of applications and devices. What do you prioritize? Are we going to prioritize provisioning traffic over the CEO getting his email? I don't know; that's not a question I want to answer. I want to make sure that both are just as fast as they need to be.
How do you build an out-of-band management network?
MacVittie: It's either a completely separate VLAN or a completely separate physical network, so that we can make sure it's got the speed and the bandwidth and that everything on it is actually management traffic.
As things continue to get integrated and we start looking at solutions where we've got this entire integration framework where network components are starting to be more dynamic in their configuration and actions, we're going to need a lot more collaboration and an entire set of systems and architecture to be able to support all that automation and orchestration in the network.
We talk about virtualization of the network, and we say, let's assume that every component in the network is going to become virtualized. What does that mean? That means a whole lot of management and a whole lot of communication between some other system that's managing when something gets provisioned, where it gets provisioned, where it's hooked up to, the topology [behind it]. There is a whole lot of communication and integration that has to go on in order to make sure that dynamic network actually works. It's really easy to push a button and provision a switch. It's not so easy to push a button, provision a switch and actually have it configured and doing what it needs to be doing. That's going to require a lot of integration [and] a lot of management. So there's going to be a lot of traffic and a lot of communication going on. And that's going to start taking up bandwidth. Yes, we've got really fat pipes right now and really good networks. We're talking 40 gigabit at the core, and most people say that we'll never hit that. We never thought we'd need 10 gigabit either, but apparently we do.
How would a network architect determine that it's time to establish an out-of-band management network?
MacVittie: I'm a fan of proactivity, but that's not always realistic. I think most people will [establish an out-of-band management network] at a point where it starts to be very difficult to separate management traffic from actual business and customer traffic; when the lines between them blur; when it's really hard to find what you need to see on the network; when your span ports are overloaded and you're losing packets and information; when you're not getting all the data you need and you can't figure out why something didn't launch; or when some configuration failed and you didn't see it.
F5 has predicted that networks will have to integrate with scripting technologies like Chef and Puppet. Why?
MacVittie: Chef and Puppet are the two primary tools of the DevOps movement. It's the attempt to bring development methodologies and processes to IT operations. They allow you to create scripts that automate the configuration of a virtual machine or a BIG-IP or a switch or some other solution in the network. That's why the network API and the ability to integrate become more important. What the DevOps guys are tasked with is: 'Here is this application. I need you to build this deployment script that is going to deploy the virtual machine to the right place; make sure the load balancer is configured; add these firewall rules; and hook it to X, Y and Z.' So they take it and they build this package, and they use things like Chef and Puppet to communicate with the different networking components and tie them together into an automated deployment package so they can just go click, deploy. And when someone says they need to launch another instance, they can say click, deploy, and everything gets hooked up correctly.
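The deployment-package idea described here, a provision-VM, configure-load-balancer, add-firewall-rules chain, can be sketched as an ordered list of steps. This is a toy Python illustration of the concept only, not Chef or Puppet syntax, and every function and name in it is hypothetical:

```python
# Each step takes the shared deployment state, mutates it, and reports success.
def provision_vm(state):
    state["vm"] = "web-01"            # hypothetical VM name
    return True

def configure_lb(state):
    state["lb_pool"] = [state["vm"]]  # add the new VM to the pool
    return True

def add_fw_rules(state):
    state["fw"] = ["allow tcp/443 -> web-01"]
    return True

STEPS = [provision_vm, configure_lb, add_fw_rules]

def deploy():
    """Run all steps in order, stopping at the first failure, so a single
    'click, deploy' either completes fully or reports exactly where it broke."""
    state = {}
    for step in STEPS:
        if not step(state):
            raise RuntimeError(f"deployment failed at {step.__name__}")
    return state

print(deploy())
```

Real DevOps tools express this declaratively (resources and desired state rather than imperative steps), but the operational promise is the same: a repeatable package where the network configuration is part of the deployment, not a manual afterthought.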
I think probably not enough network guys are aware of this. The DevOps guys are growing out of the server admins and app admins who are coming in and trying to focus on operations. Also, the network guys don't want people to run a script against their switch and router. And who can blame them? We had these arguments many years ago when programmable routers showed up. Are you crazy? You're not going to touch our core router. So I think there is a lot of resistance from the networking team to allow these guys to come in and do these things. But ultimately it's going to be very important.
The network is a very important piece of getting an application out and delivered. If we can't include the network in that automation and that ability to orchestrate that and create repeatable, successful deployment packages that encompass the entire network, that's what's driving [the sentiment of] ‘we hate IT, let's go to the cloud and not have to worry about switches and firewalls.’ I think that kind of cultural transformation within the network team has to happen if they are going to continue to be relevant and a part of the dynamic data center as it's evolving.
So what role do networking pros have to play? Do they need to open up their infrastructure to be manipulated by these scripting technologies?
MacVittie: They have to be aware that it's there, aware that it's necessary and form their own team of guys who provide access to other teams to do this. Or, as they look at refresh cycles, they should start looking at infrastructure in networks that has more role-based access to APIs. So you can say: ‘OK, you developers are on this VLAN so I'm going to let you mess with it. And whatever happens, it's your problem, not mine. But you can't touch the finance VLAN because it's very critical to the business.’ They need to become the gatekeepers as opposed to the dungeon guards.
How do networking pros assess the capabilities of individual vendors' management APIs?
MacVittie: I'm a developer by trade, so I would say play with it. But that's not feasible for most network guys. Most networking guys are well-versed with scripting languages but not with the development side that these APIs require. So they would need to ask vendors, ‘Do you have an open management API? And what development languages are supported?’ Conversely, they could go to their DevOps guys and ask, ‘What are you using?’ Then use that to evaluate. Say, ‘do you support these things because these are what we are standardizing on? Even though I don’t understand what Chef or Puppet or REST PHP-based API means, it's what I need.’ So they need to get that list together and ask those questions.
It's also important to look at some of the management vendors. Your traditional questions are still relevant and may become more relevant, because CA, IBM and VMware are moving into that space and becoming more aware that it's about managing the entire infrastructure, not about grabbing some stats via SNMP. It's no longer about a MIB. I have to be able to control you through a much easier interface, and that means doing traditional Web-based and REST APIs and scripting languages. These are things that networking guys may not be comfortable with, but getting that list together and just asking the standard questions is important.