Saturday, December 17, 2011

Network software bugs: Are Cisco and others doing enough?

by Greg Ferro, Fast Packet Blogger


It seems that the IT industry is willing to accept that software bugs are unavoidable and that licensing agreements, along with patches, absolve vendors from any responsibility. That may be why there is so little hubbub around what I sense to be an increase in network software problems – and specifically Cisco IOS bugs.

It's not that bugs in general are a new issue. Microsoft releases between 20 and 60 patches per month for critical bugs. But with Cisco IOS software, I have noticed a significant decline in product reliability over the last two or three years, which is suspiciously the same timeframe as the company's financial problems. Maybe I am paranoid, but I have to wonder if Cisco is cutting corners on testing and validation programs in its Indian development centers.

I’ve learned that IOS software development is segmented into verticals: BGP, IP Multicast, OSPF, MPLS, etc. All of these are developed by independent teams with their own budgets and management. But there seems to be a gap in end-to-end testing. For example, I wonder if there is testing of BGP and IP Multicast integration or MPLS and OSPF integration.

Why are bugs so troubling in networking?

In an ITIL-compliant world, bugs are an identified risk, and projects allocate hundreds or thousands of man-hours to testing and validation in an attempt to locate product flaws. The cost of customer-driven network validation and testing has risen dramatically in the last five years. The trend is evident in the wide range of new testing products and solutions now on the market.

On one hand, this is not a bad thing, as we can now build better networks. But every bug found undermines confidence in the network. There is already a significant perception in IT management circles that the network is unreliable and risky. That’s why getting change windows for regular upgrades is almost impossible.

When will vendors do more?

Some people say that vendor technical support is here to fix these problems, but that's not why I pay for this service. I pay tech support for hardware failures, software upgrades and configuration support, not to receive a half-finished product.

Which leads to the question: Are vendors delivering faulty products? If customers are going to perform their own testing, locate bugs and then advise the vendors through tech support programs (paid for by the customer), then what motivates the vendor to keep software quality high?

It is true that the complexity of modern products means that some bugs or product flaws will occur. But if vendors scale back their testing programs to save money, who suffers? And who will know?

(Source - http://searchnetworking.techtarget.com/)

Network technology trends 2012: Out-of-band management and DevOps

With the new year nearly upon us, SearchNetworking.com met with Lori MacVittie, F5 Networks’ technology evangelist and senior technical marketing manager, to talk about major networking technology trends for 2012. She said network engineers will increasingly turn to virtual desktop infrastructure (VDI) as a way to get a handle on the megatrend of IT consumerization. Increased traffic on dynamic infrastructure will also force networking pros to bring back the out-of-band management network. Finally, network managers will have to open their networks up to more integration with DevOps teams, bringing back nightmares of a bygone era of programmable routers.

How will virtual desktop infrastructure (VDI) help enterprises with IT consumerization?

Lori MacVittie: We're back in that world where we had three different versions of Windows and were asking how to support all these applications. We're seeing that with all the different tablets and smartphones and laptops. We've got applications that might not necessarily work very well on tablets, and we want to make sure that users can get to those. But we don’t want to write native clients. It's just not feasible for IT to write applications for 50 different operating systems and platforms.

So if you pull virtual desktop infrastructure into the picture, it controls the application in the VDI environment. It keeps the data inside the data center for the most part, so you can still apply the right security. And you get a little bit more control without constraining the end user. They get to use the device where they want, when they need it. But you don't have to worry about the management of the actual endpoint. Of course that has an impact on the network because you're talking about new and different protocols and more devices. Some people like to multi-task. There is a lot of traffic and there are a lot of changes to infrastructure that have to be made to support something like that.

What will enterprises have to do on the network side to support all of this virtual desktop infrastructure resulting from IT consumerization?

MacVittie: One of the first things is managing access. Who are we going to allow? From where? And over what network? One of the interesting things about the phones and even some tablets today is you can connect over both the mobile network as well as your Wi-Fi. I can turn on Wi-Fi on my phone, and suddenly I'll be on the Wi-Fi network instead of the mobile network. That particular piece of information is important to the network. In the case of being on Wi-Fi, you know that my phone is in the building on the network. If I'm coming over the mobile network I could be anywhere. There may be a need to control access from certain locations, such as saying this information can't be delivered outside the building. So if you're coming in over a corporate Wi-Fi connection, I'll let you have it, but if you're coming over a mobile carrier network, I don't know where you are and you can't have it.

That ability to dig down and see who you are, what you are using, where you are and what it is you want is going to be important to controlling who is going to get access. That's a lot of traffic going back and forth. You need to identify the user, you have to pick up the information out of the data that's being transferred and the protocols themselves, and you need to be able to make intelligent decisions about it and start sending people to the right places. So I think that access management layer is going to be very important, just trying to keep control of what you can: the resources and the applications.
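The access decision described above can be pictured as a simple policy function: identify the user, the resource and the network the request arrives over, then decide whether a sensitive resource may be delivered. This is a hypothetical illustration, not F5 product code; the resource names and network labels are invented for the example.

```python
# Hypothetical sketch of a location-aware access policy: sensitive data is
# only delivered when the device is known to be inside the building.

SENSITIVE = {"finance-report"}  # invented resource name for the example

def allow_access(user, resource, network):
    """Return True if `resource` may be delivered to `user` over `network`.

    `network` is "corporate-wifi" (device known to be in the building)
    or "mobile-carrier" (device could be anywhere).
    """
    if resource in SENSITIVE:
        # Sensitive data stays inside the building: corporate Wi-Fi only.
        return network == "corporate-wifi"
    return True  # everything else is allowed from anywhere

allow_access("alice", "finance-report", "corporate-wifi")  # allowed
allow_access("alice", "finance-report", "mobile-carrier")  # denied
```

A real deployment would derive the network type from the connection itself (source address, VPN tunnel, carrier gateway) rather than trusting a label, but the decision logic has this shape.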

F5 has mentioned that the out-of-band management network will be a technology trend in 2012. Whatever happened to it in the first place?

MacVittie: The networks got so fast and so fat that we didn’t have a problem with congestion. So we could keep it all on the same network. It was easier, and everything was static. We didn’t really need to have real-time [management] communication. If we needed to get some information from a switch, we could pull it with SNMP. It wasn't imperative that we got it in 0.5 seconds. If I got it in 3 seconds or 5 seconds that was fine, because I was really just digging for information or running a report or trying to hook it up to some bigger management system like [HP] OpenView.

Why do you think the out-of-band management network is coming back?

MacVittie: As we're seeing all these things getting more dynamic, and [enterprises] want to provision [services] on demand, that requires a lot of interaction and it can be very time-sensitive. We need to be sure that if we need more capacity, that message actually gets to all the components involved at the right time; that it's not delayed; that it's not lost.

Automation is going to make us more sensitive again to the ability of all those components to receive things in a timely way, and that may require out-of-band management networks because the traffic on [production] networks is increasing. We have a lot of video and twice the number of applications and devices. What do you prioritize? Are we going to prioritize provisioning traffic over the CEO getting his email? I don't know; that's not a question I want to answer. I want to make sure that both are just as fast as they need to be.

How do you build an out-of-band management network?

MacVittie: It's either a completely separate VLAN or a completely separate physical network, so that we can make sure it's got the speed and the bandwidth and that everything on it is actually management traffic.
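Separating management traffic onto its own VLAN amounts to tagging frames with a dedicated VLAN ID and filtering on it, so management messages never compete with production traffic. A toy Python illustration of the idea (the VLAN numbers and frame format are invented for the example):

```python
# Toy model of VLAN-based separation: partition tagged frames into a
# management stream and a production stream by VLAN ID.

MGMT_VLAN = 99  # hypothetical VLAN reserved for management traffic

def split_traffic(frames):
    """Partition (vlan_id, payload) frames into management and production."""
    mgmt = [f for f in frames if f[0] == MGMT_VLAN]
    prod = [f for f in frames if f[0] != MGMT_VLAN]
    return mgmt, prod

frames = [(10, "customer web request"), (99, "snmp trap"), (10, "video stream")]
mgmt, prod = split_traffic(frames)
```

On real equipment this split is done by the switch itself (trunk ports carrying tagged frames, or a physically separate network), not in software, but the partitioning principle is the same.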

As things continue to get integrated and we start looking at solutions where we've got this entire integration framework where network components are starting to be more dynamic in their configuration and actions, we're going to need a lot more collaboration and an entire set of systems and architecture to be able to support all that automation and orchestration in the network.

We talk about virtualization of the network, and we say, let's assume that every component in the network is going to become virtualized. What does that mean? That means a whole lot of management and a whole lot of communication between some other system that's managing when something gets provisioned, where it gets provisioned, where it's hooked up to, the topology [behind it]. There is a whole lot of communication and integration that has to go on in order to make sure that dynamic network actually works. It's really easy to push a button and provision a switch. It's not so easy to push a button, provision a switch and actually have it configured and doing what it needs to be doing. That's going to require a lot of integration [and] a lot of management. So there's going to be a lot of traffic and a lot of communication going on. And that's going to start taking up bandwidth. Yes, we've got really fat pipes right now and really good networks. We're talking 40 gigabit at the core, and most people say that we'll never hit that. We never thought we'd need 10 gigabit either, but apparently we do.

How would a network architect determine that it's time to establish an out-of-band management network?

MacVittie: I'm a fan of proactivity, but that's not always realistic. I think most people will [establish an out-of-band management network] at a point where it starts to be very difficult to separate that management traffic from actual business and customer traffic; when the lines between them become very blurred; when it's really hard to find what you need to see on the network; when your span ports are overloaded and you're losing packets and information; when you're not getting all the data you need, and you can't figure out why something didn't launch; or when some configuration failed and you didn’t see it.

F5 has predicted that networks will have to integrate with scripting technologies like Chef and Puppet. Why?

MacVittie: Chef and Puppet are the two primary tools of the DevOps movement. It's the attempt to bring development methodologies and processes to IT operations. They allow you to create scripts that automate the configuration of a virtual machine or a BIG-IP or a switch or some other solution in the network. That's why the network API and the ability to integrate become more important. What the DevOps guys are tasked with is: ‘Here is this application. I need you to build this deployment script that is going to deploy the virtual machine to the right place, make sure the load balancer is configured, add these firewall rules and hook it to X, Y and Z.’ So they take it and they build this package and they use things like Chef and Puppet to communicate with the different networking components and tie them together into an automated deployment package so they can just go click, deploy. And when someone says I need to launch another instance they can say click, deploy, and everything gets hooked up correctly.
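The deployment package MacVittie describes can be pictured as an ordered list of steps run against each component. The sketch below is hypothetical Python, not Chef or Puppet syntax; the step names stand in for real Chef/Puppet resources and device APIs and are invented for illustration.

```python
# Hypothetical sketch of a "click, deploy" package: each step configures one
# component in order, so a new instance comes up fully wired.

def deploy(app, steps):
    """Run each configuration step for `app` and return a log of actions."""
    return [step(app) for step in steps]

# Invented stand-ins for the real integrations (VM provisioning, a load
# balancer API, a firewall API, etc.).
def provision_vm(app):
    return f"provisioned VM for {app}"

def configure_lb(app):
    return f"added {app} to load-balancer pool"

def add_firewall_rules(app):
    return f"opened firewall for {app}"

log = deploy("webapp", [provision_vm, configure_lb, add_firewall_rules])
```

The value of tools like Chef and Puppet is that the step list is declarative and repeatable: launching another instance is the same list run again, with every network component configured in the right order.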

I think probably not enough network guys are aware of this. The DevOps guys are growing out of the server admins and app admins who are coming in and trying to focus on operations. Also, the network guys don’t want people to run a script against their switch and router. And who can blame them? We had these arguments many years ago when programmable routers showed up. Are you crazy? You're not going to touch our core router. So I think there is a lot of resistance from the networking team to allow these guys to come in and do these things. But ultimately it's going to be very important.

The network is a very important piece of getting an application out and delivered. If we can't include the network in that automation, orchestrate it and create repeatable, successful deployment packages that encompass the entire network, that failure is what's driving [the sentiment of] ‘we hate IT; let's go to the cloud and not have to worry about switches and firewalls.’ I think that kind of cultural transformation within the network team has to happen if they are going to continue to be relevant and a part of the dynamic data center as it's evolving.

So what role do networking pros have to play? Do they need to open up their infrastructure to be manipulated by these scripting technologies?

MacVittie: They have to be aware that it's there, aware that it's necessary and form their own team of guys who provide access to other teams to do this. Or, as they look at refresh cycles, they should start looking at infrastructure in networks that has more role-based access to APIs. So you can say: ‘OK, you developers are on this VLAN so I'm going to let you mess with it. And whatever happens, it's your problem, not mine. But you can't touch the finance VLAN because it's very critical to the business.’ They need to become the gatekeepers as opposed to the dungeon guards.

How do networking pros assess the capabilities of individual vendors' management APIs?

MacVittie: I'm a developer by trade, so I would say play with it. But that's not feasible for most network guys. Most networking guys are well-versed with scripting languages but not with the development side that these APIs require. So they would need to ask vendors, ‘Do you have an open management API? And what development languages are supported?’ Conversely, they could go to their DevOps guys and ask, ‘What are you using?’ Then use that to evaluate. Say, ‘do you support these things because these are what we are standardizing on? Even though I don’t understand what Chef or Puppet or REST PHP-based API means, it's what I need.’ So they need to get that list together and ask those questions.

It's also important to look at some of the management vendors. Your traditional questions are still relevant and may become more relevant, because CA, IBM and VMware are moving into that space and becoming more aware that it's about managing the entire infrastructure, not about grabbing some stats via SNMP. It's no longer about a MIB. I have to be able to control you through a much easier interface, and that means doing traditional Web-based and REST APIs and scripting languages. These are things that networking guys may not be comfortable with, but getting that list together and just asking the standard questions is important.

Wednesday, December 14, 2011

AN EXPLOSIVE DEBATE OVER CPB 2011 GOES DOWN AT MOSTI OPEN HOUSE

by Wern Shen
Tuesday, 13 December 2011 04:46 PM


The scene at this morning’s “open house” discussion between ICT professionals and members of the board behind the controversial Computing Professionals Bill was nothing short of explosive. Since the doors swung open at 9.30am, the hall of building C4 was filled to the brim with more than 50 ICT practitioners, all equally upset at the arbitrary draft which emerged last week.

Hosting the “open house” debate were four panelists (seated from left to right above) – Professor Zaharin Yusoff (UNIMAS), En. Shaifubahrim Saleh (PIKOM), Professor Dr. Halimah Badioze Zaman (National Professors Council), and Dato Dr. Raja Malik Raja Mohamed (Malaysian National Computer Confederation) – who tried to steer the proceedings of the morning’s discussions. Unfortunately for them, the band of ICT professionals in attendance weren’t pulling any punches, and tempers flared as simple questions were met with seemingly scripted answers.

“The intention was never to restrict the ICT community,” urged Professor Zaharin when questioned over the potential implications which could arise from the Bill’s restriction on unqualified ICT workers. “In fact, this bill was drafted to push the profile of the Malaysian ICT community. By implementing these kinds of standards, we can get our people to be recognized internationally.”

“We need this to set the standards,” added Professor Dr. Halimah in reference to the current crop of unemployed IT graduates. “The ICT industry is in trouble and is nowhere near as strong as it was in ‘94. Back then, we had the cream of the crop working the industry,” she added. “With this Bill, studying IT would be just as appealing as studying medicine or engineering, because the graduates can call themselves professionals.”

Needless to say, these statements didn’t go down well with the crowd.

“That’s ridiculous,” shouted an attendee in retaliation. “We should move away from the mindset that only graduates can be professionals! I have worked in the industry for more than 10 years without your certification. Does that mean that I am not a professional?”

“That’s why we are opening up membership to existing computing professionals too. It isn’t only open to graduates,” interjected Professor Zaharin.

Although his response holds true, the path for “membership” under the current draft is an uncertain one. Existing IT professionals without BCPM (Board of Computing Professionals Malaysia) accredited qualifications will first have to be reviewed by the board before they can be considered a registered computing practitioner.

“What about our existing certifications?” shouted yet another disgruntled attendee. “My certification is recognized by the USDOD. It is recognized worldwide. Why do I need your certification to work?” he exclaimed.

“We understand your worries, but like we said in the draft, this Bill will only affect people who are involved with CNII,” explained En. Shaifubahrim. “We also know that the current definition of CNII isn’t clear. MOSTI, Cybersecurity and several other parties all have their own definition of what falls under CNII, and this is one area of ambiguity which we will work on clearing up,” he assured.

The vague blanket of CNII was one of the key areas of concern shared by today’s attendees. According to Cyber Security Malaysia, Critical National Information Infrastructure (CNII) is defined as those assets (real and virtual), systems and functions that are vital to the nation, such that their incapacity or destruction would have a devastating impact on national economic strength, national image, national defense and security, government capability to function, and public health and safety.

”Don’t harp on the CNII now,” urged Professor Zaharin. “We will hold more rounds of discussions to address this issue. If this Bill goes into effect, it will only apply to those involved with CNII. For the rest of the industry, it is business as usual.”


A large portion of the crowd in attendance were not impressed by the ambiguous and indirect answers from the panel.


A quick look around the Malaysian blogosphere reveals that a number of people in attendance for this morning’s proceedings seemed less than happy with the outcome.

“We have discussed this issue extensively over social media,” began yet another question from the floor. “Why hasn’t anyone from the board or MOSTI participated in the discussion?”

“We can’t respond to everyone,” answered Dato Dr. Raja Malik. “Appoint a representative, and we’ll talk to him.”

“The fight (against the Bill) is still a long one,” said Daniel Cerventus, an IT professional who is based in KL. “We didn’t get many answers today, just parties pushing the blame around.”


Feedback forms were distributed throughout the session. No word on how well these were received.

The board moderating this morning’s session came from varying backgrounds, but noticeably missing from the lineup was a representative from MOSTI. Although they did entertain the lion’s share of today’s questions, their indirect and indecisive answers just went to show how lightly they were treading on this issue.

We left this morning’s session with the assurance that this was the first of many open-ended discussions to be held regarding the drafted Bill, and that it would not be passed before more industry players were consulted.

Whether or not the promise will be kept is a different story. We do know, however, that Malaysian ICT professionals are a united group of people fighting for a common goal: the freedom to practise their trade without unreasonable restrictions.

IT Bill may be dumped, says MOSTI - By Shannon Teoh December 13, 2011

PUTRAJAYA, Dec 13 — The Science, Technology and Innovation Ministry (MOSTI) said today it would discard the controversial Computing Professionals Bill if the industry can find a better way to boost the local industry towards meeting world-class standards.

Deputy Minister Datuk Fadillah Yusof told reporters after the ministry’s open day on the Bill that it is up to the IT sector to find ways to “uplift the IT profession.”

“There is no decision on whether the Bill is going to be done or not. We can use any other mechanism. That is why we have this open day. It is up to the profession to decide how to protect themselves.”

The Petra Jaya MP said IT professionals voiced concern over the Bill when a draft surfaced online last week, saying registration under the Board of Computing Professionals will hurt the billion-ringgit industry by shrinking the pool of eligible professionals.

But the ministry’s ICT policy division, which is facilitating the process of formulating the Bill, told The Malaysian Insider today it would have “no problems” if the industry were to offer a better alternative.

Under Secretary Amirudin Abdul Wahab said the move was to reverse Malaysia’s sliding standards in computing, as reflected by its drop from 50th place to 56th place in the International Telecommunication Union’s ICT Development Index between 2002 and 2008.

Datuk Halimah Badioze Zaman, who is part of the working committee drafting the Bill, also said the Bill is not yet finalised, having gone through 17 revisions so far. “This is the fastest vehicle for us to get to international standards. If not, what would be another vehicle to bring us forward?” the National Professors Council member said at the open day.

The Bill was earlier criticised as unnecessary by about 50 IT practitioners who attended the open day, with several accusing the government of “creating a crony club” for favoured companies.

The current draft of the Bill seeks to establish a board that will certify individuals and firms who qualify to bid for the government’s Critical National Information Infrastructure (CNII) projects as computing professionals and computing service providers respectively.

The panel of MOSTI advisers said today they were still working on a clear definition of CNII, with the definition used by the government’s cyber security agency being one of the templates. Cyber Security Malaysia defines CNII as systems and functions that are vital to the nation, such that their incapacity or destruction would have a devastating impact on national economic strength, image, defence, public health and safety, and the government’s ability to function.

(Sources - http://www.themalaysianinsider.com)

Friday, December 9, 2011

Government to establish Board of Computing Professionals

The Ministry of Science, Technology and Innovation (MOSTI) has been given responsibility for handling the establishment of the Board of Computing Professionals Malaysia (BCPM).

Accordingly, the BCPM Secretariat will hold an open day to obtain public feedback on the draft BCPM act. The open day will be held as detailed below:


Date : 13 December 2011 (Tuesday)
Time : 9.30 am – 5.00 pm
Venue : Dewan Perhimpunan
Level 1, Block C4, Complex C
Ministry of Science, Technology and Innovation

All individuals directly involved in the ICT field are invited to give their views on the establishment of the BCPM.

Wednesday, December 7, 2011

Mampu enters third phase of open-source masterplan

PUTRAJAYA: The Government is working to come up with more of its own IT solutions as it enters the third phase of the Malaysian Public Sector Open Source Masterplan.

Launched in 2004 and headed by the Malaysian Administrative Modernisation and Management Planning Unit (Mampu), the masterplan aims to enhance the usage of open-source technologies in the public sector.

In the third phase of the masterplan, Mampu aims to enable, empower and sustain the open-source ecosystem in the public sector.

"We want to enhance and improve our achievements in the second phase and strengthen the public sector open-source ecosystem to be self-reliant," said Datuk Nor Aliah Mohd Zahri, Mampu deputy director general of ICT.

Nor Aliah said this while presenting a paper at the Malaysian Government Open Source Software Conference 2011 here.

In this phase, the Open Source Competency Centre (OSCC) will take on a more consultative and regulatory role as Government agencies and ministries set up their own mini OSCCs.

"These mini OSCCs will be the guide, trainer and knowledge centre for the development of Open Source Applications within the various ministries and agencies," Nor Aliah said.

Up until the change, the OSCC had been the single point of reference for open-source developers in the public sector.

Nor Aliah said the OSCC will be transferring its knowledge as a one-stop information centre to the mini OSCCs parked at the ministerial level. The transfer is expected to be completed by the end of next year.

Mampu also has other plans in the third phase to further strengthen the development and adoption of open-source applications within the government sector.

Among them is to have open-source software leaders in all ministries who will lead the efforts in developing open-source based solutions. Mampu also plans to implement a ranking system in ministry offices to gauge the public service's understanding and adoption of open source.

Encouraging signs

According to Mampu, the Malaysian public sector's adoption of open-source software has been encouraging.

Nor Aliah said about 60% of government personnel are trained in open-source development and some are also working toward certification.

"Mampu has developed many applications using open-source technologies such as MySurfGuard, MyMeeting and MyWorkspace," she said.

The masterplan will also prepare the country to be a nation that is also a technology supplier and not only a user, Nor Aliah said.

"Open source is the future. The uncertain global economic outlook will drive the adoption of open-source technologies and we will be ready for that," she said.

Sunday, December 4, 2011

Spending on Security Companies Booming, PwC Finds.

By John E Dunn, techworld.com Dec 4, 2011 10:10 pm

The $60 billion global computer security industry has become a hot sector for a range of investors, including mainstream IT companies, aerospace and defense giants, and private equity, a PricewaterhouseCoopers (PwC) analysis has reported.

With the exception of the recessionary year of 2009, the last three years have seen an M&A mini-boom, with spending on security companies rising every year to reach record heights in 2011, which has already recorded $10.1 billion of deals.

This figure was exaggerated by the huge $7.8 billion Intel paid for McAfee in February, but there have been other notable deals in the current year, including the $612 million Dell paid for SecureWorks and Raytheon's $490 million buy of Applied Signal Technology.

The rationale for buying security companies varies from sector to sector. Defense contractors want to diversify as military spending is constrained by financial deficits in many NATO countries, while rival tech companies simply see security as a lucrative element to add to their portfolios.

Private equity and the wider investment community, meanwhile, have simply noticed the sudden interest in security companies and turned up to reap some of its rewards.

Publicly quoted security companies have also benefited, seeing their price-earnings multiples range from the humdrum 14.1 for mature businesses such as Symantec to as much as 51 for smaller companies such as Fortinet and Sourcefire.

PwC sees no letup in the interest in security on the back of a predicted growth rate in spending on the industry's products of close to 10 percent per annum for at least the next three to five years.

Even with a possible second recession in three years, underlying trends almost guarantee this growth; security is playing catch-up against threats that have evolved more rapidly than people thought likely only half a decade ago.

"Growing threats and awareness, and changes in technology such as mobile devices and cloud computing are key drivers of spending growth in the cyber security market," said PwC's Barry Jaber.

(Source - http://www.pcworld.com)


Saturday, December 3, 2011

Dell goes networking, acquires Force10.

By Larry Dignan | July 20, 2011, 4:53am PDT

Summary: Dell said it will wrap Force10’s networking gear into its data center portfolio, which features servers, storage and services.

Dell said Wednesday that it will acquire Force10 Networks as it aims to move into networking.

Terms of the deal weren’t disclosed.

In a statement, Dell said it will wrap Force10’s networking gear into its data center portfolio, which features servers, storage and services.

The big picture here is that servers, storage and networking are increasingly being bundled together. And on the networking front, it appears that every hardware vendor is nibbling at Cisco Systems. Cisco entered the server market with its Unified Computing System. HP responded by buying 3Com and hurting Cisco margins. Cisco is cutting costs and jobs to deal with the new market realities. On Tuesday, Intel said it planned to buy Fulcrum Microsystems, a fabless chip company that makes Ethernet fabrics. Now Dell is buying Force10.

Simply put, every tech vendor is looking to create these data center building blocks of storage, servers and networking for cloud computing.

Force10 has almost $200 million in annual revenue. Dell characterized the Force10 purchase in the same mold as Compellent and EqualLogic. The plan is to take the technology and products and move them through Dell’s sales channels.


(Source - http://www.zdnet.com)

BTG (IV): Recent Controversial Theories Concerning Malay and Islamic CIVILIZATION

by Suhaimi Hj Yusoff

1. The latest magnum opus, Historical Fact and Fiction, by the nation's intellectual figure, Professor Tan Sri Dr. Syed Muhammad Naquib Al-Attas, has sparked a rather extraordinary debate. The work, which revolves around Malay and Islamic civilization, has drawn the attention of many historians because it is said to have opened a new dimension in the relationship between the two. The book offers a new interpretation of the history of the Nusantara and the Malay World. Its contents are presented in a new version that contradicts what has been recorded in this country's history and passed down from one generation to the next. It squarely rebuts the historical accounts of the origins of the Malays and of Islam in this region, much of which was written by various Western scholars.

2. The intellectual clash between the thinking of Dr. Syed Muhammad Naquib and Western historians in this celebrated book has become the main draw for local and international historians seeking to explore the perspective of this Islamic intellectual, who spent decades on the research he now shares. He also addresses several matters that form the essence and main focus of the work. Among them is the history of Islam's original arrival in the Nusantara, which he argues was in fact brought through special missions by preachers from the Arab lands.

3. Those preachers, moreover, had genealogical ties tracing back to the Prophet Muhammad (pbuh). This latest theory contradicts the account presented by earlier historical scholarship, namely that Islam was brought to this region by traders from India, China and Persia. To strengthen his theory, he explains how the Malays were able to master Arabic well without any trace of an Indian or Persian accent. The book also addresses the conversion of Parameswara to Islam, which he argues actually took place even before the founding of the Malacca Sultanate. Indeed, Parameswara is also believed to have been Malay, with the Islamic name Muhammad, and to have been the son of a Palembang king named Sang Aji.

4. He also reveals that the long-held belief that Malacca takes its name from the Melaka tree is untrue. Instead, he states that 'Melaka' is a word taken from Arabic meaning 'port'! Also disclosed in the book are the identity of the first Muslim king of the Samudera-Pasai kingdom and the origin of the word 'Melayu' in the name Sumatera. In addition, the practice of inheriting the throne in Malay custom during the golden age of the Malay kingdoms in the 14th century is recounted as well.

5. This magnum opus has in itself already made a considerable impact on theory and philosophy, particularly regarding Malay and Islamic history in this country, because many parties have been jolted by it. There have been calls for the historical facts about the Malays and Islam contained in the national school history textbooks to be overhauled, with Dr. Syed Muhammad Naquib's celebrated work serving as the primary reference. It must also be acknowledged that such a formidable undertaking would not be easy, because the history of the Malays and Islam is not recorded only in this country. The Western scholars who originated the currently accepted theories about the Malays and Islam also possess documentation shaped by their own framework.

6. Hence, any effort to revise the historical record cannot be carried out without his theories first being debated among historians. For example, the various theories put forward in the book have been discussed in several series of discourses by academics in Indonesia. Indeed, some of those academics have ranked him alongside Malek Bennabi, the renowned historian and thinker from Algeria who once challenged Western theories of Islam through his book, Islam in Society and History. After Indonesia, the theories in this work will be debated in Turkey and several other Muslim countries.

7. It is expected that the Malay translation of this magnum opus will have an even greater impact in time to come, since the commercial value invested in it could elevate it into a major reference work that allows the history of the Malays and Islam to be documented anew from the perspective of Muslim intellectual scholars themselves.


The original article is taken from Utusan Malaysia dated Sunday, 20 November 2011 and adapted for publication on the Balancing The Geoid (BTG) blog.


(Article number 11 under Category IV, Social and Community: Nusantara)

Monday, November 28, 2011

Data center building, power, and cooling disciplines are not IT disciplines


Your expertise on applications, software architecture, network, server and storage design is not expertise on building tier IV data centers with 99.995% uptime.

Likewise, experts on mission critical facilities like hardened data center buildings, data center power redundancy and cooling are rarely experts on mission critical systems and applications.

A best-of-breed CIO strategy would include expertise in both information technology systems design and highly available data center facilities. How is this done?

If your organization likes to “roll your own” enterprise data center, you probably hire design/build experts to help you accomplish your goals of high data center uptime. Although the capital costs associated with in-house data centers can be enormous, internal data centers offer the highest level of control.

If your organization is considering outsourcing the facilities disciplines, wholesale colocation offers a simple way to offload the “landlord” side of the data center without losing control of the systems.

It’s often best to outsource data center facilities when you’re great at IT but not so great at building data centers.

Midwest colocation facilities like Lifeline Data Centers offer F5 tornado-resistant buildings, N+N power and cooling redundancy, and access to many telecom providers. Midwest data centers' low power costs also give you peace of mind that you've done the best job of solving the data center downtime problem with an affordable colocation solution.

Are you trying to be an expert in both facilities and IT? Talk it over with the mission critical facilities experts.

(Source - http://www.lifelinedatacenters.com)

Sunday, November 27, 2011

Skype, Facebook Expand Video Chatting Capabilities

Posted on Thursday Nov 17th 2011 by Nicholas Kolakowski.

Having been integrated into Microsoft, Skype is now moving ahead with new Facebook integration and some new features for its Mac and Windows versions.

The latest versions of Skype for Mac and Windows now boast the ability to conduct Facebook-to-Facebook calls from within Skype. Starting such a call involves connecting the user's Skype and Facebook accounts, then selecting a Facebook friend with whom to chat.

"This new feature lets you maintain social connections with your Facebook friends and complements previously announced features such as being able to see when your Facebook friends are online," read a Nov. 17 posting on the official Skype blog.

Skype is also smoothing the video-rendering capabilities of Skype 5.4 Beta for Mac, and has added to Skype 5.7 Beta for Windows a group screen-sharing capability for any Windows users with a Premium subscription.

Microsoft purchased Skype for $8.5 billion earlier this year, turning the voice over IP provider into a business division headed by Skype CEO Tony Bates. Microsoft executives have repeatedly announced their intention to tightly integrate Skype's assets with Microsoft products, ranging from Xbox Kinect to Windows Phone, although support for "non-Microsoft client platforms" such as the Mac will apparently continue for the duration.

Microsoft ended up paying far more for Skype than its previous overlord, eBay, which had agreed in 2005 to pay $2.6 billion in cash and stock for the then two-year-old company. Four years later, a team of private investors, including Silver Lake Partners and Andreessen Horowitz, took it off the auction Website's hands for $1.9 billion in cash. Before the Microsoft acquisition, Skype had supposedly been raising money for an initial public offering, but that offering was delayed after the company appointed Bates to the CEO role in October 2010.

Microsoft also has a tightening relationship with Facebook, whose social-networking features (such as the increasingly ubiquitous "Like" button) have been incorporated into the Bing search engine.

Despite the massive Skype acquisition, most of Microsoft's recent corporate activity has centered on partnerships with Facebook, Nokia and the like. This spares Microsoft, despite its considerable financial reservoirs, from having to shell out billions on potentially risky takeovers; however, it also raises the specter of discordance in strategic aims between partners.

(Sources - http://mobile.eweek.com)

From Edison’s Trunk, Direct Current Gets Another Look

Thomas Edison and his direct current, or DC, technology lost the so-called War of the Currents to alternating current, or AC, in the 1890s after it became clear that AC was far more efficient at transmitting electricity over long distances.

Today, AC is still the standard for the electricity that comes out of our wall sockets. But DC is staging a roaring comeback in pockets of the electrical grid.

Alstom, ABB, Siemens and other conglomerates are erecting high-voltage DC grids to carry gigawatts of electricity from wind farms in remote places like western China and the North Sea to faraway cities. Companies like SAP and Facebook that operate huge data centers are using more DC to reduce waste heat. Panasonic is even talking about building eco-friendly homes that use direct current.

In a DC grid, electrons flow from a battery or power station to a home or appliance, and then continue to flow in a complete circuit back to the original source. In AC, electrons flow back and forth between generators and appliances in a precisely synchronized manner — imagine a set of interlocking canals where water continually surges back and forth but the water level at any given point stays constant.

Direct current was the electrical transmission technology when Edison started rolling out electric wires in the 19th century. Alternating current, which operated at higher voltages, was later championed by the Edison rivals Nikola Tesla and George Westinghouse.

The AC forces won when Tesla and Westinghouse figured out how to fine-tune AC transmission so that it required far fewer power plants and copper cable.

DC didn’t die, however.

AT&T adopted direct current for the phone system because of its inherent stability, which is part of the reason that landline phones often survive storms better than the electric grid.

And household appliances and much industrial equipment — everything from hair dryers to jet planes — are built to use DC. Embedded converters bridge the mismatch between the AC grid and the DC devices on the fly.

But those constant conversions cause power losses. For example, in conventional data centers, with hundreds of computers, electricity might be converted and “stepped down” in voltage five times before being used. All that heat must be removed by air-conditioners, which consume still more power.

In a data center redesigned to use more direct current, monthly utility bills can be cut by 10 to 20 percent, according to Trent Waterhouse, vice president of marketing for power electronics at General Electric.

“You can cut the number of power conversions in half,” Mr. Waterhouse said.
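As a rough sanity check on that claim, the arithmetic of chained conversions is simple: each stage's efficiency multiplies. The per-stage efficiency and stage counts below are illustrative assumptions for the sketch, not figures from the article:

```python
def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of conversion stages."""
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

# Assumed 95%-efficient stages: five conversions in a conventional AC design
# versus two in a hypothetical DC redesign.
ac_chain = [0.95] * 5
dc_chain = [0.95] * 2

print(round(delivered_fraction(ac_chain) * 100, 1))  # ~77.4% of input power delivered
print(round(delivered_fraction(dc_chain) * 100, 1))  # ~90.2% delivered
```

Under those assumptions, halving the number of conversions turns roughly 23% losses into roughly 10%, which is consistent in spirit with the 10 to 20 percent utility-bill savings quoted above.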

On a far smaller scale, SAP spent $128,000 retrofitting a data center at its offices in Palo Alto, Calif. The project cut its energy bills by $24,000 a year.
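Those two figures imply a simple payback period. This back-of-the-envelope calculation uses only the numbers quoted above and ignores discounting:

```python
# Simple (undiscounted) payback period for the SAP retrofit described above.
retrofit_cost = 128_000   # one-time retrofit cost, USD
annual_saving = 24_000    # reported yearly energy-bill reduction, USD

payback_years = retrofit_cost / annual_saving
print(round(payback_years, 1))  # ~5.3 years before the retrofit pays for itself
```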

The revival of DC for long-distance power transmission began in 1954 when the Swedish company ASEA, a predecessor of ABB, the Swiss maker of power and automation equipment, linked the island of Gotland to mainland Sweden with high-voltage DC lines.

Now, more than 145 projects using high-voltage DC, known as HVDC, are under way worldwide.

While HVDC equipment remains expensive, it becomes economical for high-voltage, high-capacity runs over long distances, said Anders Sjoelin, president of power systems for North America at ABB.

Over a distance of a thousand miles, an HVDC line carrying thousands of megawatts might lose 6 to 8 percent of its power, ABB said. A comparable AC line might lose 12 to 25 percent.
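To see what those loss percentages mean in delivered power, here is a small sketch. The 3,000 MW line size and the specific loss fractions (7% for HVDC, 18% near the middle of the AC range) are assumed for illustration, not taken from ABB:

```python
def delivered_mw(sent_mw, loss_fraction):
    """Power arriving at the far end of a line after transmission losses."""
    return sent_mw * (1.0 - loss_fraction)

sent = 3000.0  # hypothetical line carrying thousands of megawatts

hvdc = delivered_mw(sent, 0.07)  # assumed ~7% HVDC loss
ac = delivered_mw(sent, 0.18)    # assumed ~18% AC loss

print(round(hvdc), round(ac), round(hvdc - ac))  # 2790 2460 330 (MW)
```

Under these assumptions the HVDC line delivers about 330 MW more from the same sending-end power, which is the economic case for HVDC on long, high-capacity runs.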

Direct-current transmission is also better suited to handle the electricity produced by solar and wind farms, which starts out as direct current.

In most situations, solar or wind energy has to be converted, and sometimes reconverted, into AC before it can be used. With HVDC, conversions can be reduced. DC grids can also more easily manage the variable output that occurs, say, when a storm hits or the wind dies.

In the United States, the Tres Amigas power station in New Mexico will use HVDC links to connect the nation’s three primary grids — the eastern, western and Texas grids. Ideally, it will create a marketplace where customers in New York and Los Angeles will be able to buy power from wind farms in Texas, which often have to dump power because of the lack of local demand.

HVDC Light, a version of HVDC invented by ABB in 1997 that is designed for shorter distances, has started to gain popularity because its cables are coated with extruded plastic. That allows cables to be buried underground more easily, avoiding some of the land-use hearings that have delayed proposals for above-ground AC transmission lines in the United States and Europe.

Direct current is also getting more attention at the level of individual buildings.

Nextek Power Systems, for example, has developed a system for delivering power via DC to lights and motion sensors through a building’s metal frame, instead of through wires.

Paul Savage, chief executive of Nextek, based in Detroit, understands why the public might view that notion with trepidation. But he said the current was not enough to electrocute anyone.

“If you licked your fingers you might get a little bubbly feeling, like if you put a nine-volt battery on your tongue, but it is not noticeable if you’re in a non-wet environment,” he said.

Of course, AC remains by far the dominant standard for electricity, and many are dubious about “DC is better” arguments.

Hardware for HVDC and other direct-current applications is expensive, so capital costs have to be recovered through efficiency. Google, never shy about experimenting with energy-saving technologies, has veered away from DC data centers, claiming that the capital costs do not justify the switch.

Still, sales and sales inquiries are climbing, DC advocates said. Just don’t expect Current War II, said Mr. Sjoelin.

“This is a complement,” he said. “We’re not going back to Edison.”

(Source - http://www.nytimes.com)

Thursday, November 24, 2011

Microsoft is officially looking into buying Yahoo

Yahoo and Microsoft have signed a nondisclosure agreement, and Microsoft is officially looking into purchasing the aging Internet company.

The Microsoft courtship of Yahoo has officially moved from rumor to confirmed. Microsoft and Yahoo have taken a very early step in the buying process. Microsoft and Yahoo have signed a nondisclosure agreement, which means Microsoft can now poke around at Yahoo’s financials to see if it wants to make an offer.

There have been several recent rumors of Microsoft trying to buy Yahoo, as well as of other possible suitors such as AOL. While other companies may be interested in buying Yahoo, none of the other big-name companies has signed a nondisclosure agreement. Now that Microsoft is able to look over Yahoo’s books, it can decide what exactly it wants to do moving forward.

It still isn’t clear if Microsoft will be buying Yahoo outright or if it will partner with equity firms and make an investment in the search giant. A couple of private equity companies have also signed nondisclosures with Yahoo, but their interest looks to be in making an investment rather than making a purchase.

There are several reasons why Microsoft should be interested in Yahoo, most notably search. Yahoo powers the sales behind Microsoft’s Bing search results, and we are sure Microsoft wouldn’t want someone else taking that over. It is also possible that Microsoft will want to integrate Skype into some of Yahoo’s services, such as Yahoo Messenger.

No matter what ends up happening, Yahoo needs some form of help, either from Microsoft or someone else. Even Yahoo’s iconic San Francisco billboard is feeling the crunch. This is only the first step in the process, and it is by no means a sure signal that Microsoft will actually buy Yahoo.

(Source - http://www.digitaltrends.com)

Sunday, November 20, 2011

Iran detects Duqu virus in government systems

Iran said Sunday that it had detected the Duqu computer virus, which some security researchers argue is based on Stuxnet, the worm believed to be aimed at sabotaging the Islamic Republic's nuclear sites, according to a report.

Gholamreza Jalali, head of Iran's civil defense organization, told the Islamic Republic News Agency (IRNA) that computers at all of the main sites at risk were being checked and that Iran had developed antivirus software to fight the virus.

"We are in the initial phase of fighting the Duqu virus," Jalali said. "The final report that says which organizations the virus has spread to and what its impacts are has not been completed yet. All the organizations and centers that could be susceptible to being contaminated are being controlled."

Word of the Duqu computer virus surfaced in October, when security vendor Symantec said it had found a virus whose code was similar to that of Stuxnet, the cyberweapon discovered last year. While Stuxnet was aimed at crippling industrial control systems, security researchers said Duqu seemed to be designed to gather data so that future attacks would be easier to launch.

"Duqu is essentially the precursor to a future Stuxnet-like attack," Symantec said in a report last month, adding that instead of being designed to sabotage an industrial control system, the new virus could gain remote access capabilities.

Iran also said in April that it had been targeted by a second computer virus, which it called "Stars". It was not clear if Stars and Duqu were related but Jalali had described Duqu as the third virus to hit Iran.

(Source - http://www.zdnetasia.com)

Saturday, November 19, 2011

Siemon, Cisco, Intel and Aquantia team up to discuss 10GBASE-T adoption in the data centre

At a recent Emerging Technology Forum in Portland USA, experts from leading network infrastructure companies Siemon, Cisco, Intel and Aquantia addressed key advances and considerations in the trend towards increasing market adoption of 10 Gigabit Ethernet (10GBASE-T) technologies in the data centre.

Topics covered were key 10GBASE-T market drivers and projections, the evolution of server connectivity, decreasing power needs and cabling design options with 10GBASE-T, and others. This event offered actionable advice for networking professionals on critical 10GbE decision points across the data centre infrastructure.

Panel contributors included Dave Chalupsky, Intel Network Architect, Carl Hansen, senior product manager with Intel’s Data Centre Standards group, Carrie Higbie, Siemon’s global director of data centre solutions & services, Sudeep Goswami, product line manager of Cisco’s Server Access and Virtualization Business Unit and group chair for the Ethernet Alliance 10GBASE-T committee and Sean Lundy, director of technical marketing at Aquantia.

According to Siemon’s Carrie Higbie, category 6A and higher connectivity is being planned in new data centres, “85% of the new data centre designs we see are cabling for 10GBASE-T.” Higbie also noted a continuing upswing in the global use of shielded cabling for 10GBASE-T, including the traditional UTP dominant markets such as the US.

Siemon has been marketing and selling 10GBASE-T-ready cabling since 2004, and now that 10GBASE-T equipment is becoming more economical and its power consumption is falling, the time has come for customers to take full advantage of their category 6A and higher cabling investment.

Among the event highlights were Aquantia’s Sean Lundy and Intel’s Carl Hansen and Dave Chalupsky providing insight on how chip innovations from their respective companies were expected to significantly drive down 10GBASE-T power requirements for more energy-efficient 10GbE networks. According to Lundy, “The current 40nm generation can already achieve power of a couple of watts for connectivity within the rack in data centres and will trend to 1 watt or less with energy efficient ethernet and migration to finer geometries. We have now achieved a power, area, density envelope that has enabled dual-port LAN on Motherboard (LOM). Between LOM and 48-port high density switching, in 2011, we will see the beginning of the hockey stick growth curve for 10GBASE-T”.

Regarding widespread commercial availability of 10GBASE-T equipment, Cisco’s Sudeep Goswami stated that Cisco is serious about 10GBASE-T and projected that the company’s flagship Nexus product family would join its Catalyst line in supporting 10GBASE-T in 2011.

(Reference - http://www.thedatachain.com)

How Do Health Information Websites Score on a 100-Point Customer Satisfaction Scale?




Private-sector health information websites scored a 79 on a 100-point customer satisfaction scale, while health insurance websites scored a 51, according to a study by ForeSee, a customer experience analytics firm.


Public-sector health information websites scored a 78 on the scale, according to the study. The study defined public-sector health information websites as those maintained by the federal government and not-for-profit organizations. Meanwhile, hospital and health system websites scored a 78.


Two sub-categories of private-sector health information websites -- sites that included information about pharmaceuticals and health products -- both scored a 76.


The study also found that health information website visitors who give a satisfaction rating of 80 or higher say they are 127% more likely to use the site as their main resource for interacting with a health care organization.


Results are based on an analysis of 100,000 surveys conducted from August to September 2011.


Source: ForeSee, "The 2011 ForeSee Healthcare Benchmark"



(Source - http://www.ihealthbeat.org/data-points/2011/how-do-health-information-websites-score-on-a-100-point-customer-satisfaction-scale.aspx)

Friday, November 18, 2011

The toilet, re-imagined: four water-saving designs.

Saturday is World Toilet Day, an annual awareness-raising campaign sponsored by the World Toilet Organization and aimed at improving sanitation access for the 2.6 billion people around the world who lack it.

A number of NGOs are working to develop low-cost, low-power systems to address the public health dangers and environmental degradation that comes from poor sanitation in the developing world.

But there’s also a fair amount of innovation underway to improve the design and efficiency of the conventional flush toilet. Herewith, a quick survey of some of these re-imagined toilets.

More than dual flush

Old-school toilets can use as much as five gallons — five gallons! — with each flush. To reduce that obvious waste of a precious resource, a number of manufacturers are offering dual-flush toilets. One lever is pushed for a “number one” and the other lever — which sends markedly more water into the bowl — is used to flush poop.

But Caroma, an Australian manufacturer of commercial and residential bathroom products, has one-upped the dual flush toilet with its Profile Smart 305.

Yes, that is a sink you’re seeing, integrated into the toilet. Here’s the way it works. After you do your business, you use the sink to wash your hands. The sink uses fresh water, but that water is then stored in the tank as grey water. And then when the toilet is flushed, it uses the grey water instead of more fresh water.

The unit also includes a dual-flush, so it is already designed for efficiency.

But the use of the integrated sink only serves to boost the efficiency by eliminating the need to pump fresh water into the bowl.

And think about that for a minute: fresh, clean water, straight from a water treatment facility, is pumped into the billions of flush toilets around the world. There’s no reason for that, and it not only wastes the water, it wastes the considerable energy that went into cleaning and delivering that water to the building in the first place.

But the ergonomics on this model are a bit funky. It seems like it would be easier to use if the sink were positioned perpendicular to the bowl… and maybe a tad bigger.

And that’s just what the Spanish company Roca did with its W+W (for wash basin and water closet) design. As with the Caroma, the water that goes down the sink’s drain is collected, filtered, and then used to flush the toilet. But with the sink facing out from the toilet seat, it has a more separated look and, I would imagine, feel. In fact, it wouldn’t feel odd to use the Roca basin for, say, tooth-brushing.

The Roca model also uses an energy-saving faucet design. The handle always opens the faucet into a cold water position, so that the user can’t inadvertently create demand for hot water unless he or she really wants to.

The DIY approach

But replacing a perfectly good toilet with a new one like the Caroma or Roca is a bit wasteful in its own right. Fortunately, there are ways to recycle grey water for your sanitary needs through retrofits. Kentucky-based WaterSaver Technologies sells a solution called AQUS, which collects water from your bathroom sink and pipes it into the water reservoir in your toilet. The company says the sink doesn’t need to be located right next to the toilet for the system to work, because it can pipe the water across the room and into the toilet.

There are a couple of downsides to this approach, though. The holding tank eats up storage space under the sink, and the system needs power to operate, so you’ll need a spare outlet.

If space is as great a concern as water savings, there’s this Yanko Design solution that combines not only the sink and toilet into a single unit, but tosses in a mirror, a mini table and an… espresso cup holder? That’s what this design appears to offer:

In any case, the centuries-old flush toilet design is getting an overhaul, with an eye toward water and energy conservation. And that’s a good thing.

Photos: Flickr/Britt Selvitelle, Caroma, Roca, WaterSaver, Yanko Design

(Reference - http://www.smartplanet.com)