Sunday, May 26, 2013

iPhone 6 rumor rollup for the week ending May 24

iPhone delays derail 4G rollouts, flexible Gorillas, rainbow phone

One of the iOSphere's enduring myths is that one or another component or production screwup has repeatedly delayed most iPhones, including the iPhone 5S or 6 or whatever.

The latest iteration of this myth is that some screwup with the alleged Next iPhone's fingerprint sensor, itself another unconfirmed feature, delayed the phone's release, causing British mobile carrier Vodafone to upend its 4G network rollout and postpone it until the end of summer. As is often the case, there is less to this than meets the eye.

Also this week: expect flexible Gorilla Glass screens on the next iPhones; you'll know it's a cheap iPhone by its colors; and new fan art reveals what impressive new specs for the Next iPhone might look like if the Next iPhone looked like what a guy with Adobe Photoshop thinks it will look like.

Britain's Vodafone has decided to delay the introduction of its 4G/LTE network -- originally planned for June -- because Apple has run into manufacturing delays and postponed the release of the iPhone 5S.

That's the theory anyway of a post at The Telegraph headlined "Vodafone delays 4G network until late summer for iPhone 5S" by Christopher Williams.

As is often the case with The Telegraph, the details are sketchy.

"The operator had originally planned to switch on superfast mobile broadband in cities in June, but said on Tuesday it would wait until August or September," Williams writes, in a posting whose confusing randomness seems to be the result of copying-and-pasting under deadline pressure to get something online in 15 minutes.

According to Williams, "The Silicon Valley giant [meaning Apple] had been expected to introduce an updated version, the iPhone 5S, that will work at the new frequencies in June, but it was reported last month it had been pushed back because of manufacturing delays." Yet he offers not a shred of evidence, not even a link to rumor sites, to support this assertion.

(Pocket-lint, another U.K. tech website, gives a more specific but no less confusing explanation for the iPhone 5S "delay": "a Vodafone-compatible iPhone was expected for this summer until rumours of fingerprint sensor issues allegedly stalled Apple's release." On its face, Pocket-lint is saying that the rumors stalled the release, but apparently intended to say that Apple delayed announcing the phone because it ran into some kind of problem with the phone's fingerprint sensor, yet another long-rumored feature.)

"Sources confirmed the delay to the iPhone 5S had been a factor in Vodafone's decision to delay 4G," Williams continues, again without any indication as to who or what the sources are.



Tuesday, May 21, 2013

What's next for Ethernet?

400G in the short-term; longer-term, expect Petabit Ethernet in 40 years

Internet traffic will quadruple in five years and the number of mobile Internet connections will exceed the world's population by 2017, according to Cisco research.

The number of Internet users will be a quarter billion greater this year than last and almost three times that of 2005, according to the ITU.


Bandwidth requirements in data centers keep rising to accommodate the growth in users and the service levels they demand. We’re seeing it now with the progression from 10G to 40G to 100G Ethernet. Soon, Gigabit Ethernet will go the way of Fast Ethernet.

But starting 20 years before the World Wide Web, Ethernet speeds have increased by an order of magnitude just about every 10 years or less: 10Mbps in 1973-83, 100Mbps in 1993, 1G in 1998, 10G in 2002 and 100G in 2013.

Does that mean we’ll see Terabit Ethernet in 2023? We’re already on the way.
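For illustration, here is a rough back-of-the-envelope extrapolation of that curve, sketched in Python. The milestone years are the ones cited above; the loop and variable names are mine, and the projection is a naive trend line, not an IEEE roadmap.

# Back-of-the-envelope extrapolation of the Ethernet speed milestones cited above.
# Illustrative only; standards work does not actually follow a smooth curve.
milestones = [
    (1983, 10e6),    # 10 Mbps (development began in 1973)
    (1993, 100e6),   # 100 Mbps
    (1998, 1e9),     # 1G
    (2002, 10e9),    # 10G
    (2013, 100e9),   # 100G
]

# Average number of years per 10x speed increase across those milestones.
spans = [y2 - y1 for (y1, _), (y2, _) in zip(milestones, milestones[1:])]
years_per_10x = sum(spans) / len(spans)          # roughly 7.5 years

# Continue the trend from 100G out to Petabit Ethernet.
year, speed = milestones[-1]
while speed < 1e15:
    year += years_per_10x
    speed *= 10
    print(f"{speed / 1e12:g} Tbps around {year:.0f}")

The naive trend line puts Terabit Ethernet in the early 2020s; the standards-cadence reasoning later in this piece pushes ratification slightly later, to 2023-24.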

The IEEE recently launched a study group to explore development of a 400Gbps Ethernet standard to support booming demand for network bandwidth.

Networks will need to support 58% compound annual growth rates in bandwidth on average, the IEEE claims, driven by simultaneous increases in users, access methodologies, access rates and services such as video on demand and social media. If current trends continue, networks would need to support capacity requirements of 1 terabit per second in 2015 and 10 terabits per second by 2020, the organization says.
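As a quick sanity check of those figures (an illustrative calculation, not IEEE source material), a 58% compound annual growth rate does connect the two capacity numbers:

# A 58% CAGR applied to 1 Tbps in 2015 lands close to 10 Tbps in 2020.
cagr = 0.58
capacity_2015_tbps = 1.0
years = 2020 - 2015
capacity_2020_tbps = capacity_2015_tbps * (1 + cagr) ** years
print(f"Projected 2020 capacity: {capacity_2020_tbps:.1f} Tbps")   # roughly 10 Tbps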

So even though 100G products are just starting to appear, it’s time to look into 400G, says John D'Ambrosia, chair of the new IEEE 802.3 400Gbps Ethernet Study Group and chief Ethernet evangelist, CTO office, at Dell.

"There's a tsunami in terms of bandwidth," D'Ambrosia says. "The iPhone didn't exist when we started 100G" Ethernet.

Video makes up more and more of the content on the Internet, and more and more of that video is generated from mobile devices.

Social media site Facebook now supports more than a billion users, vs. the tens of millions it had when 100G was first explored in 2006. The 100G standard was ratified in 2010. Four hundred gigabit Ethernet is expected to follow a similar timeframe and be ratified in 2017.

So in 2023-24 we could expect to see Terabit Ethernet ratified, after a study group begins in 2019-20. Then 10T Ethernet 10 years after that; then 100T; and then Petabit Ethernet 40 years after Ethernet's 40th anniversary, and 60 years after Gigabit Ethernet.

“By 2053, you will have a Titan (supercomputer) in your living room,” said Huawei Enterprise COO Jane Li at last month’s Ethernet Technology Summit conference in Santa Clara. Titan, based at the Oak Ridge National Laboratory in Tennessee, is the world’s largest supercomputer.

Also by 2053, data centers will be running petabit-per-port networks and wireless LANs at 50Tbps, Li believes. She sees 10T Ethernet ports on data center switches and servers, and hundreds of gigabits on WLAN links in 20 years.

Video and Big Data will drive much of it, Li says.

“People want more and more the experience of being there by not being there,” Li says of video and virtual presence it can provide. “Facebook only represents the beginning of Big Data.”

More and more switching will be done on the processor itself, and clouds will become a utility grid, Li predicts. And then a new generation of sensors will usher in new applications to analyze the huge volumes of data they generate.



Saturday, May 18, 2013

ServiceNow wants to be the cloud for IT

Instead of being afraid of the cloud, IT shops should embrace and control it, ServiceNow says

Many enterprise IT shops may be reluctant to jump head first into cloud computing. After all, a variety of concerns come with using the cloud, from security to integration with existing systems to, perhaps scariest of all, what the cloud will mean for your IT job.

But cloud services are being used within organizations with or without the blessing of IT. ServiceNow has a solution for this dichotomy: a sort of onboarding process to get IT comfortable with using the cloud while enabling functionality for business end users. And what better department to start with, ServiceNow says, than IT itself? IT shops get a first-hand look at how the cloud is used, what it's good for and what it's not.

ServiceNow has targeted IT service management applications as the first stop for off-loading apps to its cloud. Based on ITIL best practices, the platform provides a way for IT to manage incident reporting and response, change requests and troubleshooting.

But recently the company has been moving to support broader IT functions, such as IT operations management and regulatory and compliance issues, on its cloud platform. It's even expanding into use cases that let non-IT business units within an enterprise use the cloud, such as creating customized workflow apps through a simple process that doesn't require coding.

A view of the ServiceNow dashboard on the company's new iPad app. ServiceNow provides a range of services, including allowing IT to become a broker of cloud resources, and a new app creator tool.

The idea is for ServiceNow to be the single console where IT managers and business units consolidate their applications into a platform that spans the organization. ServiceNow isn't going to replace Salesforce.com or other big-time enterprise apps, its executives say, but it can help simplify the dozens, hundreds or sometimes thousands of apps used by business units.

ServiceNow seems to be catching on. Revenues for the company have more than doubled year-over-year, and this week at its customer conference, Knowledge ’13 in Las Vegas, the company attracted almost double the number of users from last year, up to 3,800. It also rolled out a few major enhancements to its offerings that are meant to give IT greater control in becoming a broker of cloud services and for organizations to use the ServiceNow cloud to build customized applications. Announcements included:

ServiceNow Cloud Provisioning
This new feature allows end users to self-provision cloud resources on multiple types of clouds, including Amazon Web Services' Elastic Compute Cloud and VMware-powered clouds, while keeping control in IT's hands. Users can request and provision from a catalog of cloud-based resources established by IT. Central IT shops can customize parameters of use, such as how long a resource (a virtual machine, for example) stays active and what information about the VM's use is saved.

The provisioning tool sits on top of a tool like VMware vCenter, which handles the under-the-covers VM provisioning through its hypervisor. The cloud provisioning tool is meant to sit above that software as a way to manage multiple cloud resources and user functionality through a single management console. IT is no longer an inhibitor to cloud use by the company, but a broker.
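To make the broker model concrete, here is a minimal, purely hypothetical sketch in Python of the catalog-plus-constraints idea described above. None of the names or classes come from ServiceNow; they only illustrate how IT-defined parameters (such as a VM's maximum lifetime) can bound a self-service request, with the actual provisioning delegated to something like vCenter.

# Hypothetical sketch (not ServiceNow code): IT defines an approved catalog and
# limits; users request from the catalog; real provisioning happens elsewhere.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CatalogItem:
    name: str
    max_lifetime_days: int        # "parameter of use" set by central IT

CATALOG = {
    "small-linux-vm": CatalogItem("small-linux-vm", max_lifetime_days=30),
    "large-windows-vm": CatalogItem("large-windows-vm", max_lifetime_days=7),
}

def request_resource(item_name: str, requested_days: int) -> dict:
    """Validate a user request against IT's catalog before provisioning."""
    item = CATALOG.get(item_name)
    if item is None:
        raise ValueError(f"{item_name} is not in the IT-approved catalog")
    days = min(requested_days, item.max_lifetime_days)   # IT's limit wins
    return {
        "resource": item.name,
        "expires": datetime.utcnow() + timedelta(days=days),
        # a real deployment would now hand off VM creation to the underlying
        # hypervisor manager (e.g. VMware vCenter) and log usage for IT
    }

print(request_resource("small-linux-vm", requested_days=90))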

Application creator
One of the other main features of ServiceNow’s cloud is the ability to create customized applications on the platform. Initial iterations of this were geared towards IT workers, who could create apps to manage a range of tasks. One app could automate the setup process for onboarding a new employee, for example. ServiceNow’s App Creator, launched this week, aims to make that process of creating customized apps easy for anyone in the business.

It uses information already in the ServiceNow cloud to create databases, tables, charts, graphs and workflows. These can then be adjusted and rearranged, populated with new information or selectively deleted to create new workflow automation apps. It doesn’t require the ability to code, but there are options to program in JavaScript.

ServiceNow says more than 1,000 apps have already been created by IT pros using its cloud. At the conference this week ServiceNow highlighted Target, which created an app within ServiceNow’s cloud that directs customer service requests in its stores directly to the person best suited to handle the problem. “This is not an app core to IT, it’s for retail satisfaction management,” says ServiceNow CTO Arne Josefsberg. Many of these line of business apps are similar to apps IT departments would create: There’s a problem, so a workflow is created to manage the resolution. These apps can automate that process. As another example, GE used the platform to manage its field personnel doing service calls on wind turbines, Josefsberg says.

ServiceNow iPad app
Through a new HTML5 application, IT workers can now control their IT services through a touch-screen-enabled app. Basically, anything that can be done through the ServiceNow web portal can now be done through the iPad app.

The moves are part of a broader transformation at ServiceNow during the past few years. The company, which was founded in 2004 and is based in San Diego, has been refining its strategy and brought on a new executive team. Frank Slootman, who led Data Domain before it was purchased by EMC, was brought on as ServiceNow's new CEO in 2011, along with a handful of other top-level executives who also migrated from EMC. Attempting to build up its cloud chops, ServiceNow that same year added Josefsberg, the former GM of Microsoft Azure's infrastructure. Allan Leinwand, the former infrastructure chief at Zynga, who helped build one of the most advanced hybrid clouds of its time, recently came on board at ServiceNow as well.

The cloud is an enormous opportunity for many businesses, Josefsberg says, but the tools for IT departments to truly get a handle on how it can be rolled out across the enterprise have not yet been readily available. That's what ServiceNow is trying to change.




Tuesday, May 14, 2013

How Facebook developers screwed up Facebook Home

Had the Facebook developers working on the Android app been familiar with Android, maybe Facebook Home would have been more successful.

Facebook Home, the product of Facebook's work to put its social network at the front of the Android mobile operating system, was reportedly built by a team of developers who were not familiar with Android.

TechCrunch reported today that "some of the Facebookers who built and tested Home normally carry iPhones." That explains why Facebook Home has been so poorly received among Android users, many of whom have criticized changes to important Android features like widgets, docks, and app folders that were buried beneath the Facebook Home interface.

The prevalence of native iOS users at Facebook is partially the fault of the company's management, which has issued iPhones to its employees for years. However, the company apparently tried to diversify its developers' mobile habits sometime last year, TechCrunch reported last November. Facebook has been hanging posters around its campus encouraging employees to "droidfood," a play on the term "eat your own dogfood," which means to use the technology you're working on.

Besides drawing criticism, Facebook Home has also been a commercial disappointment, attracting just 1 million downloads in its first month on the market. That figure pales in comparison with the Instagram for Android app, which reached 1 million downloads in its first day on the market and 5 million in less than a week.

Meanwhile, the HTC First, the only smartphone that came with Facebook Home pre-installed, has reportedly been discontinued by AT&T, according to Boy Genius Report.




Networks in 2020: More traffic, less energy

The GreenTouch industry consortium says new technologies could cut power consumption by 90 percent

Networks could use far less energy by 2020 even though they'll be carrying much more traffic, an industry group says.

The GreenTouch consortium, formed in 2010 to speed up progress on more efficient networks, says it has identified technologies that together could cut network power needs by 90 percent even in the face of rapidly growing data demand. The group of equipment vendors, component makers and service providers will present that conclusion in a report due in mid-June.

"There is potential with these new technologies to support the traffic growth and still make the energy consumption go down," said Thierry Klein, chairman of GreenTouch's technical committee. Klein also leads green research at Alcatel-Lucent's Bell Labs division.

The tools that make this possible include new devices, components, algorithms, architectures and protocols, Klein said. All have been proved in labs, he said. The potential energy savings represent a comparison between a 2010 network carrying that year's traffic and a theoretical 2020 network carrying projected traffic for that year.

"If you were to use all of those things together, this is the overall potential," Klein said. GreenTouch is working on other technologies that could drive even greater efficiency but weren't proven enough to include in the report, he said.

GreenTouch won't ship any products itself, but rather is helping to bring carriers and vendors together to find ways to reduce power consumption. When Alcatel-Lucent announced the formation of the group, it had 10 members, and that list has since grown to 50, including Huawei Technologies, Fujitsu, Samsung, Vodafone, China Mobile and numerous universities.

However, some of the biggest names in carrier networking, including Ericsson, Cisco Systems and Nokia Siemens Networks, aren't part of GreenTouch. Their absence could represent a missed opportunity for even more progress on green networks, according to Saverio Romeo, an analyst at Frost & Sullivan.

GreenTouch's findings are promising and could become real, but there are too many different green-network initiatives in play today, Romeo said. Broader efforts under standards bodies such as the International Telecommunication Union (ITU) or Institute of Electrical and Electronics Engineers (IEEE) could lead to even more power gains, he said.

"We are missing out on even greater efficiency ... if there is a lack of cooperation between these various activities," Romeo said.

However, most carriers will have plenty of motivation to invest in higher efficiency over the next several years, Romeo said. Along with growing demands for capacity, many carriers are facing flat revenue, he said. That will make the energy bill an obvious target for savings.

GreenTouch's Klein also believes carriers will want to invest in the new technologies, even though most of the features will require new equipment. "A dollar spent on energy is a dollar wasted today," Klein said.

If network gear doesn't change, it will take much more energy to carry the amount of traffic users will produce by 2020, GreenTouch says. The conclusions of its study are based on forecasts that traffic on wireless networks will have multiplied by 88 times between 2010 and 2020, while wired access networks will grow about 10 times busier and wired core networks will see traffic multiply by eight times.

One problem with current networks is that most of them are always on, even when not needed.

"For the most part, the energy consumption of the equipment is at the peak power, or very close to the peak power ... even when there is no load," Klein said.

GreenTouch has identified ways to solve that problem by making networks more adaptive, so components or entire systems can be shut down when not needed. This is similar to what vendors have promised is possible with server virtualization, where cores or systems could be turned off during periods of low demand. It effectively turns network resources into Lego blocks that can be added or removed as needed, Klein said.

"Even at the subsecond level, I can turn some of the equipment on and off very fast, and I save energy when I don't need all the bandwidth to handle the traffic," Klein said.

Wireless networks are the least efficient, according to GreenTouch. The technologies it has already identified could cut wireless networks' power consumption by a factor of 1,043, the group said. Shifting traffic from macro cells on towers to small cells indoors or at street level is one way to do this. However, that could leave even more pieces of equipment up and running without any traffic at some hours. The Lego-block approach, applied to wireless, could power down individual small cells or even change the power level of an antenna as needed, Klein said.

Wired access networks could be made 449 times more efficient and core networks could see a 95x gain by 2020, the group said. The optical technology that's widely used in wireline networks makes them more efficient than wireless already. Other technologies that could help to drive efficiency in the next seven years are Bi-PON (bit-interleaved passive optical networks), content caching and separation of the control and data planes of the network, according to GreenTouch.
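To see how those multipliers net out, divide each segment's projected traffic growth by its potential efficiency gain; the ratio is the 2020 energy draw relative to 2010. The arithmetic below simply restates the figures quoted above.

# Net 2020 energy relative to 2010 = traffic growth / efficiency gain.
segments = {
    # segment: (traffic growth multiplier, efficiency gain), per GreenTouch figures
    "wireless":     (88, 1043),
    "wired access": (10, 449),
    "wired core":   (8, 95),
}

for name, (traffic_x, efficiency_x) in segments.items():
    net = traffic_x / efficiency_x
    print(f"{name:>12}: ~{net:.2f}x the 2010 energy, a {(1 - net) * 100:.0f}% cut")

All three segments come out at a 90-percent-or-better reduction, which is roughly where the consortium's headline figure sits.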




Wednesday, May 8, 2013

The 7 elements of a successful security awareness program


When we were asked to keynote a recent CSO event, it was a pleasant surprise that the top concern of the CSOs was "security culture." Having performed many security assessments and penetration tests, we find it sadly obvious that even the best technical security efforts will fail if a company has a weak security culture. It is heartwarming that CSOs are now moving past purely technological solutions and toward instilling a strong security culture as well.


To determine the components of a truly successful security awareness program, we performed a study to identify critical success factors for building one. We interviewed security awareness practitioners at Fortune 500 companies and surveyed the security staff and general employees at the companies. Additionally, we validated the results and gathered additional information at a security executive event in the United Kingdom with more than 150 security executives participating.

While there are many more lessons to be learned, what follows are the 7 most notable habits we found that lead to successful security awareness programs.

Counterpoint: " Why you shouldn't train employees for security awareness," by Dave Aitel of Immunity Inc.

1. C-Level support
Awareness programs that obtain C-level support are more successful. This support inevitably leads to more freedom, larger budgets and support from other departments. Anyone responsible for running a security awareness program should first at least attempt to obtain strong support, before focusing on anything else.

Yes, getting this level of support can be difficult, but our research also found best practices for obtaining it. Successful efforts frequently highlighted that security awareness is required for compliance and that awareness efforts provide a return on investment that will inevitably save the company money. They also created special materials specifically for upper management, such as newsletters and short articles highlighting news and tips relevant to executives.

2. Partnering with key departments
Successful awareness programs found a way to involve other departments, such as legal, compliance, human resources, marketing, privacy and physical security. While it is easier to get this support if you have the C-level support, these departments frequently have mutual interests and might be amenable to providing additional resources, such as funding or distribution. Frequently, these departments can make security awareness efforts mandatory. For example, the legal and compliance departments carry a great deal of influence throughout the organization and can make security awareness a required component of other processes, such as new hire indoctrination.

To obtain this support, you might find that you have to incorporate the needs of the cooperating departments into your general security awareness efforts. For example, you might offer to include compliance content in a security awareness newsletter. If it gets you the support you need, the effort is definitely worth the trouble.

3. Creativity
Creativity is a must. While a large budget helps, companies with a small security awareness budget have still been able to establish successful programs. Creativity and enthusiasm can make up for a small budget. One example of creativity is the use of a "security cube" during a company event: the security awareness department set up a mock cubicle, containing 10 common security violations, in the main hallway, and employees who could identify all 10 violations were entered in a prize drawing. Another effort involved handing out boxes of chocolates containing the security policy document on Valentine's Day; employees reported that they felt compelled to read the document because they liked the chocolate. These are just examples, but clearly there are an unlimited number of options.

4. Metrics
One of the key factors in having a successful effort is being able to prove that it is successful. The only way to do this is to collect metrics before initiating new awareness efforts. Without a baseline, it is hard to demonstrate that your efforts had more than assumed success.

The metrics can include surveys on attitudes. They could also include the use of phishing simulation tools before and after awareness training. You can also examine the number of security-related incidents, such as attempted visits to banned websites. When you can show measurable improvement in any aspect of security, you can justify your program and obtain additional funding and support. Just about every department in a company has to prove its value, and security should not expect to be an exception.
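As a minimal sketch of the baseline-versus-follow-up comparison described above, consider a phishing simulation run before and after training. All figures here are hypothetical and only illustrate the arithmetic.

# Hypothetical phishing-simulation metric: baseline vs. post-training click rate.
baseline = {"emails_sent": 500, "clicked": 120}    # measured before the program
follow_up = {"emails_sent": 500, "clicked": 45}    # same simulation, after training

baseline_rate = baseline["clicked"] / baseline["emails_sent"]
follow_up_rate = follow_up["clicked"] / follow_up["emails_sent"]
improvement = (baseline_rate - follow_up_rate) / baseline_rate

print(f"click rate: {baseline_rate:.0%} -> {follow_up_rate:.0%} "
      f"({improvement:.0%} relative improvement)")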

5. Department of how
Awareness efforts that focus on how to accomplish actions are more successful than those that focus on telling people what they should not be doing. Clearly there are actions that should not be allowed, but those should be the exception, not the rule. For example, it is not realistic to tell employees that they should not be on social networks, but it is useful to tell them how they can use social networks safely.

6. 90-day plans
Most security awareness programs follow a one-year plan, and those plans typically attempt to cover one topic a month. This is ineffective: it does not reinforce knowledge, and it does not allow for feedback or account for ongoing events. Programs that rely on 90-day plans, and reevaluate the program and its goals every 90 days, are the most effective. The most successful programs focus on three topics simultaneously, reinforced regularly throughout the 90 days. Every 90 days, the program is reevaluated to determine what topics need to be addressed moving forward.

7. Multimodal awareness materials
The most successful programs are not only creative; they rely on many forms of awareness materials. While there is a potential place for learning management system training modules, too many programs rely on them completely as an awareness program. Successful programs incorporate a variety of awareness tools. This includes newsletters, posters, games, newsfeeds, blogs, phishing simulation, etc. The most participative efforts appear to have the most success.

Another issue to consider is that materials should attempt to connect with different generations. For example, some videos seem to connect best with young men; you then need other videos or materials that connect with older employees and with women. There is definitely no such thing as one-size-fits-all security awareness.

Conclusions
There were many more habits that led to either the success or failure of security awareness programs, but these seven are a starting point. The big takeaway is that habits drive security culture, and there is no technology that will ever make up for a poor security culture. Awareness programs, when properly executed, provide knowledge that instills behavior. Security should definitely be common sense, but you cannot have common sense without providing common knowledge.





Wednesday, May 1, 2013

Brocade unleashes a data center barrage

New and enhanced hardware and software designed to tightly couple the physical with the virtual

Brocade this week extended its data center networking portfolio with hardware and software enhancements designed to better integrate and align physical and virtual resources.

For virtual networking, Brocade rolled out the vRouter virtual router, obtained from its recent acquisition of open source networking software company Vyatta, and the Virtual ADX Application Delivery Switch. For physical networking, the company unveiled new modules for its MLXe core router and NetIron Carrier Ethernet switch, as well as updated operating system software for those platforms.

Brocade also announced an OpenStack plug-in and an Application Resource Broker for data center orchestration and management.

The Brocade Vyatta 5400 vRouter is software for highly virtualized data centers. It is designed to enable the configuration of multitier networks that can be deployed, configured or changed on demand. Brocade Vyatta vRouter is already deployed in Amazon Web Services, and supports VMware, Microsoft, Citrix and Red Hat hypervisors.

Release 6.6 of the vRouter includes support for multicast routing and dynamic multipoint VPN (DMVPN), for secure transmission of content to selected end-points.

Brocade Virtual ADX is designed to increase the speed of application resource and services deployment for cloud environments. The software controls application management and provisioning via the SOAP/XML API, enabling integration with third-party or homegrown orchestration and automation tools, Brocade says.

That API, along with support for OpenScript, allows for programmatic control of Layer 4-7 functions in a virtualized infrastructure, the company says. Virtual ADX is also intended to simplify orchestration of the application delivery network, and provide the ability to validate, test and replicate production or QA environments on demand.

For physical networking, Brocade rolled out 40G Ethernet interfaces for its MLXe core router, higher-performance modules for the NetIron CER Routers and expanded SDN capabilities in the NetIron OS.

The new four-port 40G Ethernet module for the MLXe features wire-speed performance for connecting with Brocade VDX/VCS fabric switches to construct an end-to-end, multitenant 40G data center. It also allows the router to support 128 40G ports per chassis.

The 40G-enabled MLXe will go up against Cisco's Nexus 7000 and 6000 switches, and Catalyst 6500 with 40G interfaces; Juniper's EX9200 and QFabric switches; HP's new 12900 and 11900 switches; and those from Dell, Extreme, Huawei, Alcatel-Lucent and other Ethernet switching combatants. It may also soon face core 40G competition from Arista Networks.

For smaller data centers that are integrated into Carrier Ethernet networks, Brocade's new four-port 10G modules for the NetIron switches are designed to extend the reach of Carrier Ethernet and enable rapid deployment of new services at the network edge.

The updated operating system software for the NetIron switches enhances high-performance routing and SDN capabilities, Brocade says. The new release supports OpenFlow Hybrid Port Mode technology, which lets customers deploy OpenFlow and traditional routing simultaneously on the same port, providing a migration path to SDN.

Hybrid Port Mode is designed to enable customers to optimize specific data flows using OpenFlow without disrupting the existing production traffic. The new software also features support for multitenant data center environments to improve cloud service delivery and enforce tighter service level agreements between customers sharing the same cloud infrastructure.

For orchestration of the physical/virtual data center and cloud, Brocade released the first in a series of OpenStack plugins for its products. The initial release, for its VDX fabric switches, enables customers to include the Brocade fabric technology in an OpenStack-managed data center/cloud environment where administrators can provision and decommission pools of compute, networking and storage resources on-demand.

OpenStack plugins for Brocade's ADX, Virtual ADX, Gen 5 FibreChannel and Vyatta vRouter will be available in the second half of this year.

Also for provisioning, the updated Brocade Application Resource Broker is designed to automate the rollout of new services and adapt to changing business conditions. The update enables hybrid cloud services, as well as business continuity across globally distributed data centers for disaster avoidance, Brocade says.

Taken together, the new hardware and software are intended to make Brocade a contender against incumbents and startups alike in the data center fabric and software-defined networking arena. Forty Gigabit Ethernet will stretch the boundaries of physical density, while features like the OpenStack plugin and OpenFlow forwarding will enable programmability in clouds and SDNs.




Control and security of corporate open-source projects proves difficult

Sonatype's annual survey of more than 3,500 software developers shows companies struggling to set corporate policy on open source and enforce it

Open source has become a staple for software development in the enterprise, but keeping track of it and maintaining security for it remains an elusive goal, according to a survey of more than 3,500 data architects and developers published today by Sonatype, which provides component lifecycle management products and also operates the Central Repository for downloading open-source software.

In spite of what is clearly considerable open-source usage -- for example, 80% of a typical Java application is now assembled from open-source components and frameworks -- 57% said their companies "lack any policy governing open-source usage" and 76% indicated a lack of meaningful controls related to software that is typically obtained at no cost, though licensed.

When asked about how well their organizations control which open-source components are used in software development projects, 24% did say, "We're completely locked down: We can only use approved components." However, 44% answered, "Yes, we have some corporate standards, but they aren't enforced," and 32% said, "There are no standards. Each developer or team chooses the components that are best for their project."

When asked about whether their company's open-source policy addressed security vulnerabilities, 24% answered, "We must prove that we are not using components with known vulnerabilities." But the remainder of the respondents indicated a weaker effort on security, saying they simply had a policy to avoid known vulnerabilities or their policy does not address security vulnerabilities.

Another survey question asked, "How would you characterize your developers' interest in application security?" To that, 40% of respondents indicated it's a top concern and they spent a lot of time on it. But 29% answered, "We do what we have to do, but this is the security group's responsibility" and 26% said, "We know it's important but we just don't have the time to spend on it." And 6% even flat out said, "It's just not something we're focused on."

According to Sonatype, more than 25% of the survey's respondents claim to have more than 500 developers in their organizations; participants in the survey included Netflix, HSBC, FedEx, Disney, Goldman Sachs, Barclays, eBay, GE, Alcatel-Lucent, RSA, Facebook and LinkedIn, according to CEO Wayne Jackson.

When the 3,500 survey respondents were asked about the biggest challenges in their company's open-source policy, the main answers were "no enforcement," "it slows down development" and "we find out about problems too late in the process."

When asked who in the organization has primary responsibility for open-source policy and governance, 36% ascribed that role to "application-development management," 14% to "IT operations," 16% to legal, 13% to an open-source committee or department, 7% to security, 7% to risk and compliance and 7% to "other."

When asked about whether policy restricted component usage based on specific license or license type, 20% said their policy did not. The remainder said "yes," with 29% indicating they examined every component but not its dependencies, and 51% saying they examined all components and dependencies.

When asked if their organizations maintain an inventory of open-source components used in production applications, 35% said yes, 45% said no, and the remainder said "yes, for all components but NOT their dependencies."

"Developers are acknowledging that components make up a large part of their application development." While there's still a lot of custom code written in C, for example, for Web applications, he says, the adoption of open source is now a way of life for both the enterprise and vendors, Jackson said.

But challenges remain in adequately tracking open-source usage and any flaws identified by the open-source community, especially in the large, widely used libraries that have become foundations of application development. "Finding a flaw in a library is not much different than finding a flaw in an operating system," Jackson concluded.