
Why you should consider hapi


Why you should consider hapi

When starting a new web application, your first decision is the platform: node, Go, Rails, etc. Your second decision is the framework. When it comes to node, there are plenty of great frameworks to choose from.

Here is why hapi should be at the top of your list.

From the early days of node, hapi was the first enterprise-grade solution. Originally developed to support Walmart Black Friday scale, hapi retained that reputation thanks to its proven track record. You probably use at least one hapi-powered web application every day without even knowing it.

Quality

On practically every measurable quality metric, hapi scores at the very top.

Code Readability

I’ll start with what I consider to be the most important measure of quality: code readability. If you pick a node framework and cannot follow the code with ease, that’s a red flag. Code readability correlates directly with what should matter most: simplicity, security, and maintainability.

It makes all the difference in the world when something goes wrong and you need to figure out what. Almost every major issue found over the past few years was reported with a solution because the code is that easy to work with. When readability and performance are in conflict, I will always choose readability. Machines keep getting faster and cheaper. Humans only get slower and more expensive.

Because the code is not micro-optimized to squeeze every bit of unnecessary performance, changes do not require complex, messy solutions. hapi will never be the fastest framework because avoiding complexity is a mantra I repeat to myself with every line of code I write or review.

Dependencies

hapi was the first (and still the only) framework without any external code dependencies. (It has one external dependency on a static JSON data file for mime types). Every code dependency is managed and controlled by our small core team and the final integrated solution is controlled solely by me.

I personally (and manually) review every single line of code that goes into hapi (excluding node itself). I review every pull request on every dependency regardless of whether I am the lead maintainer.

When you ‘npm install hapi’, every line of code you get has been verified. You never have to worry about some deep dependency being poorly maintained (or handed over to an unknown or sketchy individual).

If you followed recent news, this is a big deal.

Code Coverage and Style

hapi was the first node project to require 100% code coverage. When existing code coverage tools were not good enough, we wrote our own. We were among the first to define a strict coding style enforced in CI. We keep revisiting our style as new, better approaches are developed and then proactively move the entire code base to the new standard.
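For reference, the coverage tool hapi ended up writing is its own test runner, lab. A minimal sketch of how the 100% threshold and the lint pass are typically enforced from the command line (flags follow lab’s common usage; treat the exact invocation as an assumption rather than a quote from hapi’s build):

npx lab -t 100 -L

Here -t 100 fails the run if total coverage drops below 100%, and -L lints in the same pass, which is how both the coverage and the style requirements can be wired into CI.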

Open Issues

The entire framework consists of 27 modules. Excluding joi, which is its own (validation) framework, and mime-db, which is a static data file, the rest of the modules combined have 6 open pull requests, 9 open reported issues, and 19 open feature requests or questions.

These are mind-blowingly low numbers, especially considering hapi is the second most used node framework with an average of over 1.3 million monthly downloads. Few other projects maintain such low numbers of open issues. This isn’t easy: it takes significant effort to achieve.

Security

I might not be thrilled with the final result of the OAuth specification I co-authored, but I have made sure to build hapi with the strictest security-first approach. To this day, it is the only framework that can claim that. In hapi, security is never an afterthought or an add-on. It is central to the way we do everything.

Code Hygiene

From how the code is managed, controlled, and distributed, we always choose the most secure option available. Every contributor must have 2FA enabled on both GitHub and npm. Every publish must use 2FA. The core framework comes with a shrinkwrap file that specifies the integrity hash of every dependency. And soon we will automate the process of ensuring the code in GitHub matches the packages on npm.
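As an illustration of what an integrity-pinned entry in such a shrinkwrap file looks like, here is a minimal npm-shrinkwrap.json sketch; the dependency, version, and hash below are made up for the example and are not hapi’s actual values:

{
  "name": "hapi",
  "dependencies": {
    "hoek": {
      "version": "6.1.2",
      "integrity": "sha512-EXAMPLEONLYc2lnbmF0dXJl..."
    }
  }
}

npm verifies the integrity hash at install time, so a tampered tarball fails the install instead of silently entering the dependency tree.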

Secure Defaults

Every default is always the most secure option available. hapi blocks error messages that may leak information or echo back exploits at multiple levels. Server load is protected by default with payload limits and request timeouts. And as more standards become available, new security headers are added to the core framework.
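A rough sketch of what these defaults look like in practice; the option names follow hapi’s server configuration, while the specific numbers are arbitrary choices for the example:

const Hapi = require('hapi');

const server = Hapi.server({
    port: 8000,
    routes: {
        payload: { maxBytes: 1048576 },            // cap request bodies at 1 MB
        timeout: { server: 10000, socket: 12000 }  // abort requests that drag on
    }
});

Every route inherits these limits unless it explicitly opts out, which is the opposite of the bolt-on model where each endpoint must remember to protect itself.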

Integrated Architecture

On the application side, hapi provides the most comprehensive authorization and authentication API available in a node framework. Request authentication is a core part of the request lifecycle, not some middleware you throw in (hopefully correctly and in the right place and order).

And for the most security-conscious among us, hapi provides advanced features such as encrypted and signed cookies and secret or key rotation. There are no possible excuses for building insecure applications.
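As a hedged sketch of what this looks like with cookie-based session authentication (assuming the v17-era hapi-auth-cookie plugin; the password is illustrative and must be at least 32 characters):

const Hapi = require('hapi');

const start = async () => {
    const server = Hapi.server({ port: 8000 });

    await server.register(require('hapi-auth-cookie'));

    server.auth.strategy('session', 'cookie', {
        password: 'an-example-password-of-32-chars!',  // encrypts and signs the cookie
        isSecure: true                                 // HTTPS-only cookie
    });
    server.auth.default('session');  // every route requires auth unless it opts out

    server.route({
        method: 'GET',
        path: '/private',
        handler: () => 'only authenticated users reach this handler'
    });

    await server.start();
};

start();

Because authentication is a lifecycle step rather than middleware, there is no ordering mistake to make: the strategy always runs before the handler on every route it covers.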

Developers First

Every hapi feature is built with the application developer in mind. When adding new features, I always ask myself if this makes things more or less intuitive. At the end of the day, the question I care most about when talking to hapi users is “are you happy?”

Not everyone is going to agree with every choice I make, and there are many different valid approaches to building web applications. But rarely has someone invested in hapi to later regret that decision. I keep asking.

In survey after survey, hapi developers are among the very top on satisfaction. This is one metric I have no problems bragging about.

Predictability

One of the main reasons I built hapi instead of using Express was Express’s total lack of predictability. The order in which you added routes, middleware, or called server-level methods dictated the outcome, often with hidden side effects. As projects got bigger, things broke on a daily basis when one middleware stepped over another or when a route path pattern overlapped with an existing route.

hapi was the first node framework (and still one of the very few, if not the only) to provide strong guarantees, empowering large distributed teams to work together on a common code base.

If you load a plugin that requires other plugins, it can explicitly specify that dependency (see the sketch below). If you are adding extension points, you can specify their relative order so that future extensions will not disrupt the existing balance. Route paths will never conflict and will always resolve in the same priority order no matter what order they are added in.
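A small sketch of what those guarantees look like in plugin form; the plugin names here are hypothetical:

const reports = {
    name: 'reports',
    version: '1.0.0',
    dependencies: ['auth-plugin'],  // loading fails fast if this was never registered
    register: async (server, options) => {
        // '/reports/special' (a literal path) always outranks '/reports/{id}'
        // (a parameterized path), no matter which plugin registered its route first.
        server.route({ method: 'GET', path: '/reports/{id}', handler: () => 'ok' });
    }
};

The router resolves path specificity deterministically, so two teams can ship plugins independently without racing to register first.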

Extensibility and Customization

hapi practically invented node framework plugins, the request lifecycle, server methods, and user extensions. It has the most mature and complete set of extension points at every step including authentication, authorization, and validation. You never have to hack your way around the framework and the internals are never a mystery. Every step is clearly documented.

In hapi, everything is properly namespaced, which makes extensions safe and easy to use. You never have to worry about your application failing in production because of a runtime conflict between two extensions or plugins. Everything is validated at load time for easy identification of conflicts during development.

The Rest

These are just the highlights. I didn’t even talk about the configuration-centric approach. The built-in validation of inputs and outputs. The caching support. The rich plugin ecosystem. Using the very latest JS language features…

But I do want to give a shout out to the hapi community. I have never been part of a better, more supportive and protective group of people. From answering questions on GitHub to offering help on Slack, the hapi community has some of the nicest, smartest folks around.

So for your next node project, give hapi a look. I think you are going to like it.


See Where You Place in the PAM Maturity Model


Many companies aren’t sure how to begin their PAM implementation or which security activities have the most impact on their goals. To help you stay on course, Thycotic has developed the first PAM Maturity Model, based on industry best practices that systematically lower privileged account risk, increase business agility and improve operational efficiency. The model gives you a strategic road map for PAM adoption so you can plan ahead and prioritize resources and budget. (Scroll down to see how you rate in PAM maturity.)

Apply lessons from the PAM Maturity Model to your cyber security strategy

We know PAM isn’t a simple fix and the approach to PAM isn’t the same for everyone. Our mission is to help you become a self-sufficient security champion so you can ascend the PAM maturity curve at your own pace. You can apply lessons from the PAM Maturity Model to your cyber security strategy regardless of the size of your company, your industry or the number and type of privileged accounts you need to secure, based on your own risk drivers, budget, and priorities.

Step-by-Step Road Map

The PAM Maturity Model defines four phases of maturity organizations typically progress through as they evolve from laggards to leaders in their adoption of privileged account management.

Phase 1. Analog: Organizations in the Analog phase of PAM maturity have a high degree of risk. They secure their privileged accounts in a limited way, if at all. As a result, they often provide excess privileges to people who don’t need them, share privileges among multiple administrators, and neglect to remove privileges when users leave the organization or change roles.

Phase 2. Basic: When organizations progress from the Analog stage to the Basic stage of PAM maturity, they adopt PAM security software and begin to automate time-consuming, manual processes.

Phase 3. Advanced: As organizations move from a reactive to a proactive privilege security strategy, they enter the Advanced phase of PAM maturity and PAM becomes a top priority within their cyber security strategy. Organizations at this level are committed to continuous improvement of their privileged security practices.

Phase 4. Adaptive Intelligent: As organizations ascend to the ultimate stage of PAM maturity, they take the concept of continuous improvement to a higher level, often relying on artificial intelligence and machine learning to collect information and adapt system rules. They fully and automatically manage the entire lifecycle of a privileged account, from provisioning to rotation to deprovisioning and reporting.

The Maturity Model is based on security industry best practices and Thycotic’s work with 10,000 customers of all types, ranging from organizations beginning to experiment with PAM to the most experienced and advanced PAM users. Within the four major maturity phases there are gradations of PAM maturity which impact cyber risk, business productivity, and cost of compliance. In addition to accounting for specific security activities mentioned above, the model also reflects the frequency and scale at which organizations conduct those activities.

5 Minutes to Find Out Your Maturity Score

Based on the Model, the PAM Maturity Assessment is a free online tool that helps your security and IT teams prioritize security activities and align budget and resources. Take five minutes to answer just 11 questions. You’ll receive a score indicating your Maturity Phase and a customized report with detailed recommendations on how to ascend the PAM maturity curve.

How Mature Are You?

Take the PAM Maturity Assessment to find out. You’ll find a printable PDF of the PAM Maturity Model on the same page.

Understanding the 7 different types of data breaches


Every day more than 6 million data records are compromised, with no organisation or sector immune. Organisations are facing a data breach war, so it’s imperative that ‘know your enemy’ becomes part of their battle tactics.

Data breaches come in various forms and sizes; not all incidents are caused by sophisticated cyber attacks. To help you understand what your organisation is facing, here’s a breakdown of some of the most common breach types.

Employee negligence/error

Something as simple as including the wrong person in the Cc field of an email or attaching the wrong document to an email could cause a data breach. We’re all guilty of making mistakes (it’s human nature), but employees need to understand the most important elements of information security, and all staff, technical or not, need to be made familiar with security awareness policies and procedures.

Cyber attack/criminal hacker

The ways in which cyber criminals try to gain access to your systems are becoming more sophisticated. It isn’t always obvious that an attack has taken place until significant damage has been done. Cyber attacks can come in various forms, including denial of service, malware and password attacks.

Unauthorised access

Access controls are designed to stop certain information from being seen by the wrong people. A breach of these controls means that someone has gained unauthorised access to sensitive data, such as bank details stored by HR, or potentially compromised business critical information.

Physical theft/exposure

Although there is a lot of emphasis on the digital aspects of a data breach, physical exposure or theft of data is an equally important threat that organisations must consider in their security plans. This type of data breach can be caused by improper disposal of sensitive information, or simply leaving a confidential document in plain sight.

Ransomware

Ransomware is a type of malicious program that demands payment after launching a cyber attack on a computer system. If the organisation fails to comply with the extortion, the program threatens to destroy its essential data, although there’s no guarantee the organisation will regain access to its data even after paying up.

Insider threat

Your employees know how your organisation operates, how vital information can be accessed and the measures in place to protect it, which is why you should put in place appropriate training and security protocols.

Phishing

Emails are a common part of our daily lives, making them a popular attack vector for cyber criminals. Crooks might adopt the seemingly legitimate credentials of such organisations as insurers, banks, etc. to gain access to your personal information by encouraging you to click an unsafe link or download a malicious attachment.

Are you prepared for a data breach?

The data breach war is a reality for all organisations, and the list above highlights just a few of the threats that you need to prepare for. Moving forward, your organisation must continually assess, update and improve its defence measures. That journey will be a long one; Vigilant Software can help you start, maintain and upgrade your cyber security and privacy management measures.

Become and stay secure

Our portfolio of products is entirely cloud-based; as such, the tools are easy to integrate and are designed to support your organisation’s ability to become and remain secure. They also help your organisation meet relevant laws and regulations. Our tools, vsRisk Cloud, the Data Flow Mapping Tool, the DPIA Tool and Compliance Manager, help you to identify your legal requirements, understand the data you process and conduct information security risk assessments in line with international best practice.

Find out more

To learn more about our range of tools and protecting your organisation from a data breach, watch our short introductory videos: vsRisk Cloud, the Data Flow Mapping Tool, the DPIA Tool and Compliance Manager.

To request a demonstration of any of our tools, please click here.

Pivot3 Delivers Policy-Based Security for Hybrid Cloud Solutions

Pivot3’s expanded Intelligence Engine capabilities enable customers to streamline security and regulatory compliance across the edge, core and cloud

AUSTIN, Texas (BUSINESS WIRE) Pivot3, the hyperconverged infrastructure (HCI) performance and technology leader, today announced new policy-based security management capabilities in its core Intelligence Engine. These expanded capabilities allow organizations to automate and simplify the process of protecting sensitive data with comprehensive, standards-based security for encryption and key management. Pivot3’s new platform enhancements also facilitate regulatory compliance as customers acquire and manage data across the edge, core and cloud.



“With security threats on the rise and regulations around data security increasing, CIOs and CISOs face new challenges as they seek to protect sensitive, mission-critical data without compromising performance,” said John Spiers, vice president of strategy at Pivot3. “Customers rely on us every day to simplify management at scale. This extension of Pivot3’s Intelligence Engine brings new confidence to IT in knowing that data can be secured and protected as it moves across the entire hybrid cloud infrastructure.”

Pivot3’s Intelligence Engine enhances application performance, data placement, data protection, and monitoring and analytics, enabling customers to confidently consolidate multiple, mixed-application workloads on HCI while reducing time, cost and complexity. The addition of automated, policy-based security management capabilities to the Pivot3 Intelligence Engine enables customers to implement easy-to-use policies to seamlessly integrate data encryption and key management into the same workflow for managing applications and storage.

To address the increased regulation and compliance requirements, the new capability includes flexible, secure multi-tenancy and data-at-rest encryption at a system, volume or virtual machine level, compliant with Federal Information Processing Standards (FIPS) 140-2. Pivot3 designed its data encryption algorithms to leverage Intel Xeon CPUs’ AES New Instructions (AES-NI) to ensure minimal performance impact and low overhead. Key management is integrated into the new security policies and supports the Key Management Interoperability Protocol (KMIP) standards to provide broad support of key managers.

As part of its security portfolio, Pivot3 offers HyTrust KeyControl, which is integrated seamlessly with Pivot3’s policy-based security management. This enables enterprises to easily manage all encryption keys at scale, including how often keys are rotated and how they are shared securely.

“With the growth of hyperconverged infrastructure and the expectation that enterprises will increasingly have mixed workload environments, it is essential to employ key management as an integral part of securing the infrastructure,” said Eric Chiu, Co-Founder and President of HyTrust. “Pivot3’s Intelligence Engine will further simplify security management for enterprises and extend the industry preference for HyTrust KeyControl in a world where data must be secured no matter where it resides.”

“As organizations evolve to address emerging security threats and changing regulations, automation and intelligence become key ingredients in maintaining a secure infrastructure across the edge, core and cloud,” said Tim Stammers, senior analyst at 451 Group. “By adding policy-based security management features to its Intelligence Engine, Pivot3 is simplifying the process of protecting sensitive and mission-critical data and enabling customers to progress toward a software-defined datacenter.”

Pivot3’s policy-based security management will be available in Acuity version 10.6 before the end of the year. For more information, please visit this feature brief.

About Pivot3

Pivot3’s intelligent hybrid cloud and IoT solutions provide security, resilience and management simplicity at scale for customers’ mission-critical environments. Powered by the industry’s only Intelligence Engine, Pivot3 automates the management of multiple, mixed application workloads, delivers industry-leading performance at scale, eliminates unplanned downtime and reduces the cost of traditional IT infrastructure by half or more. With over 2,600 customers in 64 countries with deployments in education, hospitality, transportation, government, healthcare, defense, gaming, financial services and retail, Pivot3 allows IT to manage complexity at scale through intelligence and automation. Visit www.pivot3.com to learn more.

Contacts

For Pivot3
Liz Cies, 972-850-5855
lcies@ideagrove.com

Brazil’s Banco Votorantim Selects NETSCOUT’s nGeniusOne

WESTFORD, Mass. (BUSINESS WIRE) $NTCT #DDoS

NETSCOUT SYSTEMS, INC. (NASDAQ: NTCT), a leading provider of service assurance, security, and business analytics, announced today that Banco Votorantim, one of Brazil’s largest banks, has deployed the nGeniusONE platform and its Arbor DDoS solutions to ensure total service availability and optimized performance of its IT infrastructure and key applications used by its employees.



“By showing the traffic and behavior of applications, in a non-intrusive and uncomplicated way, the NETSCOUT technology allows for the monitoring of the actual user experience and the quality of the services provided by IT,” said Marcelo Maylinch, head of IT infrastructure, Banco Votorantim. “This has reduced the time spent in identifying problems involving network and applications, enabling us to quickly solve incidents before they become a risk for the bank’s business continuity.”

“It is rewarding to know that we are playing an important role in improving Votorantim’s network and digital services performance, which is enabling it to succeed in an ever-competitive marketplace,” said Geraldo Guazzelli, country manager, Brazil, NETSCOUT.

Performance and Service Excellence

As a major financial institution, Banco Votorantim operates a complex enterprise network to support its locations in Brazil and around the world. Customers interact with the bank at branch locations, electronically through web services, and on the phone with their call centers. As its customer base grew, Votorantim’s IT management required greater visibility into voice and data application services across the network for in-depth, real-time analysis, views and reports that could be used by any of the members of their IT team, as well as help inform their broader management team. Recognizing that any disruption in critical services could slow key transactions and jeopardize customer relationships, Votorantim also needed an anti-DDoS provider that could protect them on premise and from the cloud. NETSCOUT was able to help on both of these important fronts.

Banco Votorantim turned to NETSCOUT to provide visibility and service assurance in the bank’s data centers and branch offices. By using the nGenius solution to efficiently feed vital traffic flows from the data centers to several InfiniStreamNG appliances and transform wire data into smart data, NETSCOUT’s nGeniusONE analytics monitor the bank’s applications, including revenue-critical services such as online banking, credit card management and authorization, automobile loans, and wealth management, as well as all applications used by associates in the branches. To gain deeper insight into application performance, user experience and service dependencies while also adapting to the organization’s security policies and strategy, Banco Votorantim is also using complementary NETSCOUT tools such as OptiView, TruView and nGeniusPulse for synthetic performance testing and portable troubleshooting, ensuring better day-to-day operations in all of the bank’s agencies and offices.

To ensure network availability and security against different types of DDoS threats, the protection offered by the combination of NETSCOUT Arbor Edge Defense (formerly known as NETSCOUT Arbor APS) and NETSCOUT Arbor Cloud provides a complete view of the network’s activities, enabling rapid and automatic threat blocking before it can affect applications and services. Preferring to count on an independent provider rather than ISPs, Banco Votorantim adopted Arbor Cloud, a fully managed, best-practices hybrid cloud DDoS protection program. Realizing that large volumetric attacks can interrupt ISP services, the bank’s IT staff opted for a provider with a global presence and the ability to clear attacks closer to their source.

About NETSCOUT

NETSCOUT SYSTEMS, INC. (NASDAQ: NTCT) assures digital business services

against disruptions in availability, performance, and security. Our

market and technology leadership stems from combining

Cross-Disciplinary Dialogue on AI and Cybersecurity: ASCC 2018 Opens in Chongqing


Cross-Disciplinary Dialogue on AI and Cybersecurity: ASCC 2018 Opens in Chongqing

Ever since the birth of the internet, the discussion of cybersecurity has never stopped. Broadly speaking, any technology or theory concerning the confidentiality, integrity, availability, authenticity, and controllability of information on networks belongs to the field of cybersecurity research, a comprehensive science spanning computer science, network technology, communication technology, cryptography, information security technology, applied mathematics, number theory, information theory, and other disciplines.

The essence of cybersecurity is the security of information on the network. In the age of artificial intelligence, the creation of massive amounts of data and the data exchanges taking place anytime and anywhere make cybersecurity problems more complex, and they demand attention from more people. How should cybersecurity be understood in the AI era, and how can the two fields better converge?

Against this background, the 2018 AI and Cybersecurity New Technology Forum (ACSS Summit 2018), hosted by the International Association for Information Security and Data Analysis (ICSDS) and the Chongqing Association for Science and Technology and organized by Chongqing University of Science and Technology and the Chongqing Institute of Intelligent Technology and Engineering, was held in Chongqing on December 11-12, 2018. The forum focused on the “cross-border, cross-disciplinary, cross-domain” development of AI and cybersecurity. More than 20 well-known experts and professors from academia, industry, and the investment community attended and joined the discussions, including ICSDS chair Pang Shaoning, Professor Yu Shilun (Philip S. Yu) of the University of Illinois at Chicago, Professor Jin Guoqing of the Chinese University of Hong Kong, Dr. Daisuke Inoue of Japan’s National Institute of Information and Communications Technology, and Sheng Bo, general manager of the international department at Shenzhen Capital Group. Yiou, as a supporting organization, reported from the scene.



Peng Jun, professor at the School of Intelligent Technology and Engineering of Chongqing University of Science and Technology and chair of ACSS 2018, presided over the conference

At the opening ceremony on the 11th, Niu Jie, deputy inspector of the Chongqing Association for Science and Technology, one of the event’s organizers, reviewed Chongqing’s achievements in the AI field in recent years.



Niu Jie said that Chongqing has in recent years attached great importance to the development of AI and to the cybersecurity problems that come with it. Since Chongqing’s “New Generation AI Development Plan” was issued in July 2017, the strategy of “taking the improvement of new-generation AI innovation capability as the main direction, developing the intelligent economy, building an intelligent society, and safeguarding national security” has begun to show results in Chongqing. By 2020, the output value of Chongqing’s intelligent industry is expected to reach 750 billion yuan, essentially establishing the city as an important national base for the intelligent industry and a first-class national demonstration city for big-data intelligent applications.



Chongqing University of Science and Technology, with a history of more than 60 years, has in recent years worked intensively in AI and work-safety informatization, and has jointly founded the Chongqing Institute of Intelligent Technology and Engineering with the Chongqing branch of the Chinese Academy of Sciences. At the opening ceremony, university president Yin Huachuan described the school’s latest moves in AI: relying on the institute, it has built national and provincial teaching and research platforms, including a Ministry of Education ICT industry-education integration innovation base, the Chongqing Key Laboratory of Artificial Intelligence and Robotics, the Chongqing University Key Laboratory of Online Analysis and Control, and the Chongqing Engineering Technology Research Center for Online Analysis and Big Data, contributing to Chongqing’s big-data and AI development. “We hope this conference provides researchers, engineers, and scholars from around the world with a platform for academic exchange, presenting results, and win-win cooperation,” Yin said in his welcome address.

Pang Shaoning, chair of ICSDS, tenured professor of computing at a New Zealand institute of technology, and host of this summit, explained the event’s motivation to the attendees: as AI is applied to more and more fields, improving the convenience of daily life and the efficiency of work, these complex AI systems will also face more potential risks and threats; in other words, more cybersecurity problems will appear. ICSDS therefore joined the Chongqing Association for Science and Technology, Chongqing University of Science and Technology, and other organizations in holding the ACSS forum, hoping to promote research cooperation between industry and academia and, by bringing in venture capital, to turn these innovative technologies into momentum for startups.



ICSDS was founded in 2008, when several scholars from different countries and research institutions met at that year’s International Conference on Neural Information Processing and, sharing similar research backgrounds and a common vision, established the association. ICSDS currently has council members in eight countries and regions: New Zealand, Japan, China, South Korea, Australia, Malaysia, Thailand, and the UAE. At today’s ACSS summit, fellow association members Dr. Ban Tao of Japan’s National Institute of Information and Communications Technology, Professor Iqbal Gondal, director of the Internet Commerce Security Laboratory at Federation University Australia, and Professor Huang Kaizhu, head of the Department of Electrical and Electronic Engineering at Xi’an Jiaotong-Liverpool University, each reported the latest progress in their own research areas and exchanged views with scholars from eight countries and regions, including China, the United States, New Zealand, Japan, Australia, Sri Lanka, India, and Hong Kong.

Since its founding, ICSDS has held the International Cybersecurity Data Mining Competition (CDMC) every year; the competition is now in its ninth year. From only a dozen or so teams at the beginning to 124 teams from 50 countries this year, CDMC keeps growing, and the focus behind it remains applying data-analysis techniques to solve real-world cybersecurity problems. At the IT deans’ forum on the 12th, the global top four of CDMC 2018 will receive their awards.



In the roundtable session chaired by Professor Pang Shaoning, experts and scholars from AI academia, cybersecurity academia, and the investment community discussed their current areas of focus and the industry’s future direction, including Yu Shilun, distinguished professor of computer science at the University of Illinois at Chicago; Jin Guoqing, professor and associate dean of engineering at the Chinese University of Hong Kong, president-elect of the International Neural Network Society (INNS), and vice president of the Asia-Pacific Neural Network Society; Professor Christian Probst, head of the applied cross-disciplinary research institute at New Zealand’s Unitec Institute of Technology; and Yang Guang, partner at Glory Ventures. The boundaries of cybersecurity are gradually blurring and new attack techniques emerge endlessly, but as AI technology keeps advancing and its applications mature, how to better combine AI with cybersecurity has become the industry’s focal point and future direction. Yang Guang added that cybersecurity will be one of his areas of focus over the next two to three years.

Copyright notice

All content originating from Yiou (yiou.com) is copyrighted by Beijing Yiou Wangmeng Technology Co., Ltd. Articles represent their authors’ personal views and do not imply that Yiou endorses or supports those views.

The Key To Turning Your Security Program Into A Marketing Asset While Staying Se ...


It is often said, “if you don’t want something noticed, don’t talk about it”. This is true of a bad GPA, a stain on a carpet, or a project whose deadline you missed. Many security leaders see their security programs this way too: talking about your cyber program is an unnecessary risk. It draws attention to your organization both internally and externally; talking up the strength of your security program to executive management can make an inevitable attack all the more devastating, and using your security program as a marketing asset was thought to draw a target on your back.

When you think about information security prior to digitization, continuous compliance was nigh impossible, let alone necessary. Information was locked in physical filing cabinets with a finite number of keys, facilities were monitored by a human who would recognize strangers, and everything was done in person. Today, the filing cabinets are in the cloud (on servers you’ve probably never seen, if they’re even private), keys have become passwords, and teams are scattered across the globe.

Obviously, the benefits of digitization far outweigh the risks: greater access to talent, the ability to store and access more data, and overall better experiences delivered to customers. For security teams, though, a change in approach to what risk management and compliance mean is necessary.

What we are seeing now, as well, is a shift in the mindset of consumers (both businesses and individuals). They are becoming more technology-aware, demanding to know where their information is stored and how it’s used. This, combined with the tools enabling teams to practice continuous compliance, empowers a security team to be proud of its efforts and use them as a selling point for the company.

How to talk about your cyber program

Drawing upon an analysis of the two largest cloud providers, Microsoft and AWS, we’ve seen trends emerge for best practices on how to talk about your cyber program, and we’ll dispel some myths about marketing your security program.

Say what, not how

Many security professionals see talking about their programs as a means of giving away their process and allowing malicious actors insight into how the security team operates. Not so: effective marketing is done through discussing outcomes, not process. As a consumer, you want to know what a product will do for you, not how it does it. With security as a selling point, you want to educate your marketing team on the benefits of your security program: at a high level, what are you doing that is better or different than your competitors?

In this case, examples work best. See AWS discuss their controls for their data center security here.

Talk about the strategy, not the tactics

The devil is in the details: the more granular you get, the easier it is for a criminal to spot a potential opening. Collaborate with your marketing team to shape talking points that illustrate your robust security program without discussing specifics. Again, it’s about the what, not the how.

It is possible

As we’ve seen with digitization, turning your security program into a marketing asset can outweigh the risks. With a more educated customer base, simply saying “we’re secure” is no longer sufficient. The first step is using continuous compliance to ensure your environments are as secure as possible and that you can view their security posture in a single pane of glass. Next, collaborate with your marketing team to craft your value propositions and hone the messaging around the security program. As the digital revolution continues, security will increasingly become a differentiator. We are already seeing it with the internet of things. Be prepared and start shifting towards continuous compliance today.


Roles and Responsibilities of Information Security Auditor


Most people break out into cold sweats at the thought of conducting an audit, and for good reason. Auditing the information systems of an organization requires attention to detail and thoroughness on a scale that most people cannot appreciate. There are system checks, log audits, security procedure checks and much more that needs to be checked, verified and reported on, creating a lot of work for the system auditor. Becoming an information security auditor is normally the culmination of years of experience in IT administration and certification.

It is for this reason that there are specialized certifications to help get you into this line of work, combining IT knowledge with systematic auditing skills. We will go through the key roles and responsibilities that an information security auditor must fulfill in order to do the important work of conducting a system and security audit at an organization. Not all audits are the same, as companies differ from industry to industry and in terms of their auditing requirements, depending on the state and legislation that they must abide by.

This article will help to shed some light on what an information security auditor has to do on a daily basis, as well as what specific audits might require of an auditor.

Basic Duties List

Information security audits are conducted so that vulnerabilities and flaws within the internal systems of an organization are found, documented, tested and resolved. The findings from such audits are vital both for resolving the issues and for discovering what the potential security implications could be. Security breaches such as data theft, unauthorized access to company resources and malware infections all have the potential to affect a business’s ability to operate and could be fatal for the organization.

In order to discover these potential security flaws, an information security auditor must be able to work as part of a team and conduct solo operations where needed. Determining the overall health and integrity of a corporate network is the main objective in such an audit, so IT knowledge is essential if the infrastructure is to be tested and audited properly. Issues such as security policies may also be scrutinized by an information security auditor so that risk is properly determined and mitigated.

Information security auditors are not limited to hardware and software in their auditing scope. In fact, they may be called on to audit the security employees as well. Members of staff may be interviewed if there are questions that only an end user could answer, such as how they access certain resources on the network. Members of the IT department, managers, executives and even company owners are also important people to speak to during the course of an audit, depending on what the security risks are that are facing the organization.

Roles and Responsibilities on the Job

Information security auditors are usually highly qualified individuals who are professional and efficient at their jobs. They lend credibility to companies’ compliance audits by following best-practice recommendations and by holding the relevant qualifications in information security, such as the Certified Information Systems Auditor (CISA) certification.

They must be competent with regards to standards, practices and organizational processes so that they are able to understand the business requirements of the organization. This helps them to rationalize why certain procedures and processes are structured the way that they are and leads to greater understanding of the business’s operational requirements.

Auditing a business means that most aspects of the corporate network need to be looked at in a methodical and systematic manner so that the audit and reports are coherent and logical. Auditors need to back up their approach by rationalizing their decisions against the recommended standards and practices.

This means that any deviations from standards and practices need to be noted and explained. The planning phase normally outlines the approaches that an auditor will take during the course of the investigation, so any changes to this plan should be minimal.

Additional Job Requirements

The role of security auditor has many different facets that need to be mastered by the candidate; so many, in fact, that it is difficult to encapsulate all of them in a single article. However, we’ll lay out all of the essential job functions that are required in an average information security audit. First things first: planning.

The planning phase of an audit is essential if you are going to get to the root of the security issues that might be plaguing the business. You will be required to clearly show what the objectives of the audit are, what the scope will be and what the expected outcomes will be.

You will need to execute the plan in all areas of the business where it is needed and take the lead when required. You’ll be expected to inspect and investigate the financial systems of the organization, as well as the networks and internal procedures of the company. All of these systems need to be audited and evaluated for security, efficiency and compliance in terms of best practice. All of these findings need to be documented and added to the final audit report.

Strong communication skills are something else you need to consider if you are planning on following the audit career path. Looking at systems is only part of the equation as the main component and often the weakest link in the security chain is the people that use them. This means that you will need to interview employees and find out what systems they use and how they use them. By conducting these interviews, auditors are able to assess and establish the human-related security risks that could potentially exist based on the outcomes of the interviews.

After the audit report has been completed, you will still need to interact with the people in the organization, particularly with management and the executives of the company. This means that you will need to be comfortable with speaking to groups of people. You will need to explain all of the major security issues that have been detected in the audit, as well as the remediation measures that need to be put in place to mitigate the flaws in the system.

Something else to consider is the fact that being an in-demand information security auditor will require extensive travel, as you will be required to conduct audits across multiple sites in different regions. The amount of travel and the responsibilities that fall on your shoulders will vary depending on your seniority and experience.

Conclusion

The roles and responsibilities of an information security auditor are quite extensive, even at a mid-level position. This is by no means a bad thing, however, as it gives you plenty of exciting challenges to take on while implementing all of the knowledge and concepts that you have learned along the way.

Auditing is generally a massive administrative task, but in information security there are technical skills that need to be employed as well. With the right experience and certification you too can find your way into this challenging and detailed line of work, where you can combine your technical abilities with attention to detail to make yourself an effective information security auditor.


Facing the Fear and Securing the Internet of Things (IoT)


Facing the Fear and Securing the Internet of Things (IoT)
It’s official: there are now more IoT devices than humans in the world, and by 2020 there will be twice as many of them as us. It can be a sobering thought, especially if you’ve ever seen the 1980s Stephen King movie Maximum Overdrive, where the machines came alive and turned homicidal. According to a recent study by Gemalto, 90% of consumers said they don’t have confidence in IoT security. Furthermore, the study found that only 50% of companies have adopted a “security by design” approach, and more than half of consumers have concerns about their IoT devices being hacked or their data stolen. So maybe I’m not alone in my fears?

Even scarier is the fact that enterprise security breaches are happening more often, with more brute force and at higher costs. There are an average of thirteen enterprise security breaches every day, resulting in roughly 10 million records lost a day, or 420,000 every hour. Security researchers are quick to point out the vulnerabilities of connected devices and the potential harm of connecting to a device that has not been properly secured.

But with a hint of apprehension comes big opportunities. Gartner predicts 20.4 billion IoT devices will come online by 2020. When taking into account the value created from technology, as well as the potential for new market opportunities, it is estimated that the IoT will generate $14.4 trillion in net profit for enterprises over the next decade.

Security challenges

One thing is clear: organizations across all industries need to begin considering how they will secure their IoT devices, but they face challenges when they try to create solutions in house. IoT security threats are increasingly complex and constantly changing. Recent findings from 451 Group’s 4Sight report, As Infrastructure Becomes Invisible, We Are All Service Providers, show that 57% of organizations face skills shortages and lack cloud expertise in areas such as architecture, operations, and security. And when trying to recruit for the required cloud expertise, 30% of organizations find it “very difficult”.

The skills shortage and complex threatscape, combined with an astounding 60-70% of all IT enterprises expected to invest heavily in cloud-based solutions by 2020, is very good news for Managed Security Service Providers (MSSPs) who want to get in on the IoT bandwagon.

Getting in on the action

A report by ABI Research, IoT Managed Security Services to See Significant Financial Impact from Industrial Applications, says that by 2021 overall market revenues for IoT managed security services are poised to surpass $11 billion, a fivefold increase. The firm predicts the need for IoT managed security services will initially be driven by the industrial internet (interconnected machines and devices and intelligent analytics).

It’s also looking like new innovative use cases, such as connected vehicles, smart cities and utilities will drive future MSSP revenues, gradually shifting away from traditional markets such as manufacturing, transportation, and oil and gas.

Gearing up for success

Organizations are looking for more from an MSSP than just a security solution. They need a partner (or partners) who will offer expertise, guidance and support. And there is evidence that security providers are coming together to integrate solutions and apply security across the entire IoT ecosystem. The industry is moving into a new era of securing the IoT, where encryption, cryptography, identity issuance and access management form a full-stack solution rather than individual components. This way, security is built in and is no longer an afterthought or a challenge. It becomes invisible and just happens, seamlessly and securely.

Cloud-based data protection for managed service providers

As an MSSP, you can take advantage of the immediate need by offering an IoT security solution that you can brand, bundle with your cloud or security services, and offer your customers as a way to augment their security effortlessly. By leveraging reliable, repeatable and profitable services aligned to your business model, you can ensure the stickiness of satisfied customers, building in a range of security services with single-pane-of-glass management across multiple clouds.

Gemalto offers such a solution with its SafeNet Data Protection On Demand, a cloud-based platform that provides a wide range of on-demand key management and encryption services through a simple online marketplace.

Take a look at SafeNet Data Protection On Demand, or dive right in with a free 30-day evaluation.

The thought of a world of unprotected “things” can be a scary place, especially if you are a fan of horror movies like me. But it’s also an incredible opportunity for service providers to take control by offering robust security solutions developed for the increasingly complex, connected world.

SMG Comms Chapter 11: Arbitrary Size of Public Exponent


~ This is a work in progress towards an Ada implementation of Eulora's communication protocol. Start with Chapter 1. ~

The change from fixed-size to arbitrary size of the public exponent "e" of RSA keys turned out to be, of course, more than just introducing a parameter for e's size. All I can say about it is that I was rather ready for this development by now, given the known mess that is the MPI lib. So the only surprise here was the exact way in which MPI fails rather than the fact that it does fail. Let's take it slowly and in detail since it is, like everything else related to encryption, rather important to have it as clear as one can make it.

On the SMG Comms side, a shorter "e" is really no trouble at all: basically there is no such thing as "shorter" since it can perfectly be the same size as it always was, only starting with whatever number of 0s it needs to make up to the expected length, big deal. And this is in fact already handled and handled well in the wrappers I wrote previously for the RSA and MPI C code, since it was already clear that yes, any set of octets might start at any time with 0s but that doesn't make them fewer octets or anything of the sort. So at first look, there really isn't any need to change anything, since the "change" required is neatly and natively handled as what it is - just a specific case of the more general operation that is implemented. Still, for clarity and further use downstream, I decided to add a constant and a new subtype to Raw_Types simply for the purpose of providing an explicit way of using the exactly-8-octets-long e (neither the new constant nor the new subtype are put to use so far by any of the message pack/unpack or read/write methods):

  -- RSA public exponent (e) size in octets
  -- NB: this should normally match the E_LENGTH_OCTETS in smg_rsa.h
  -- NOT imported here for the same reason given at RSA_KEY_OCTETS above
  E_LENGTH_OCTETS : constant Positive := 8;
  subtype RSA_e is Octets( 1 .. E_LENGTH_OCTETS );

On the RSA side, the same constant "length of e in octets" goes into include/smg_rsa.h since it is not exactly a knob of the whole thing but rather a parameter for RSA key generation:

/**
 * This is the length of the public exponent e, given in octets.
 * TMSR standard e has KEY_LENGTH_OCTETS / 2 octets.
 * Eulora's communication protocol uses however e with 8 octets length.
 * New keypairs generated will have e precisely this length.
 * Change this to your preferred size of e for generating new keys with that size of e.
 * NB: this impacts key generation ONLY! (i.e. NOT encrypt/decrypt).
 */
static const int E_LENGTH_OCTETS = 8;

As the comments above stress, the "length of e" should normally be a concern in the code only when generating a new key pair; at all other times (encrypt/decrypt), the e that is provided will be used, whatever length it might be. Looking at the key generation code, the change to make is minimal since the code is sane - simply replace a local variable that specified the length of the required prime with the new global constant that now specifies the user's choice of length for e (in rsa/rsa.c, function gen_keypair):

/* choose random prime e, public exponent, with 3 < e < phi */
/* because e is prime, gcd(e, phi) is always 1 so no need to check it */
do {
    gen_random_prime( E_LENGTH_OCTETS, sk->e );
} while ( (mpi_cmp_ui( sk->e, 3 ) < 0) || (mpi_cmp( sk->e, phi ) > 0) );

Following the changes above, a re-read of all my rsa and smg_comms code confirmed that no, there is nothing else to change - it is after all just a matter of exposing a constant to the user, not any change of the underlying algorithm, so that's surely all, right? Well, no, of course it's not, because at the lower level, all those 0-led smaller "e" go into the MPI lib of gnarly entrails. And as it turns out, a quick test whipped out to see the whole thing in action got... stuck, going on forever somewhere in the MPI code. Where? Well, the stack trace goes 8 levels deep into the MPI code and it looks (at times, as it rather depends on where one stops the never-ending run...) like this:

#0 0x000000000040b492 in mpihelp_addmul_1 ()
#1 0x0000000000407cd4 in mpih_sqr_n_basecase ()
#2 0x0000000000407e68 in mpih_sqr_n ()
#3 0x0000000000408098 in mpih_sqr_n ()
#4 0x0000000000407d6b in mpih_sqr_n ()
#5 0x0000000000408098 in mpih_sqr_n ()
#6 0x0000000000407d6b in mpih_sqr_n ()
#7 0x0000000000405693 in mpi_powm ()

Claiming that one fully knows what goes on in 8 piled levels of MPI calls is rather disingenuous at best so I won't even go there. However, a closer look at all that code, starting with mpi_powm and following the calls, seems to suggest that the issue at hand is that MPI simply can't handle 0-led numbers correctly. To which one should add of course "in some cases" so that one can't just say fine, wtf is it doing permitting any 0-led numbers then?? No, that would be too easy: the reality is that it permits and probably even *requires* 0-led numbers in places but *in other places* it gets stuck on them. Aren't you happy to have followed this mess so far to such an amazing conclusion? At any rate, going through the MPI code yields some more fun of course, such as this fragment in mpi-pow.c:

/* Normalize MOD (i.e. make its most significant bit set) as required by
 * mpn_divrem. This will make the intermediate values in the calculation
 * slightly larger, but the correct result is obtained after a final
 * reduction using the original MOD value. */
mp = mp_marker = mpi_alloc_limb_space(msize, msec);
count_leading_zeros( mod_shift_cnt, mod->d[msize-1] );
if( mod_shift_cnt )
    mpihelp_lshift( mp, mod->d, msize, mod_shift_cnt );
else
    MPN_COPY( mp, mod->d, msize );

The obvious clue there is that at least one culprit of "can't handle 0-led numbers" is that divrem function that is indeed called as part of the exponentiation. But the added joke that's for insiders only is that the normalization there is done ad-hoc although there exists a function precisely for...normalizing aka trimming the leading 0s from an mpi! Eh, so what if it exists - by the time something gets as big and tangled as MPI, chances are nobody remembers everything there is but that's not a problem at all, right? But wait, just ~10 lines further down, there is another normalization and at *that* place, the author somehow remembered that there is even a macro defined for this purpose! And oh, another ~10 lines further, there is yet another way in which normalization is done on the spot (shifting the bits directly!). So what is it already that makes one write code like this, some misplaced purple-prose inclination, let's not repeat the same expression or what exactly? Frankly the only logical answer is that it's done on purpose - anything and everything to increase the number of lines of code. Increase productivity!!

Moving further, it turns out that this very same function actually *does* trim the leading 0s off the exponent at some point! Which of course begs the question of just how and why it is then a problem to give it a 0-led exponent? Essentially it trims it but too late/not fully/not for everything and not everywhere that it should do it, that's the best I can say about it. And overall, the fact of the matter is simply that MPI just doesn't correctly handle 0-led MPI values, end of story. To quote from MPI code and comments themselves, the author's explanation:

/****************
 * Sometimes we have MSL (most significant limbs) which are 0;
 * this is for some reasons not good, so this function removes them.
 */

So it is "for some reasons not good", mmkay? It reminds me of the other display of great expertise in "reasons". Without wasting even more time on the MPI code of wonders, the solution for SMG Comms is essentially a workaround: the C wrappers get another job, namely to ensure that the values passed on to MPI are normalized. Note that the symmetrical opposite of this, namely adding missing leading 0s, is already implemented where needed (in the Ada code that actually deals perfectly fine with 0-led values since they are not oh-so-special, really). Thankfully, this is a very simple thing to do: instead of using directly the mpi_set_buffer method to set the value of an mpi number, define an mpi_set_normalized method that calls mpi_set_buffer + mpi_normalize:

void mpi_set_normalized(MPI m, const char *buffer,
                        unsigned int noctets, int sign) {
    mpi_set_buffer( m, buffer, noctets, sign );
    mpi_normalize( m );
}

Using the above code, all the mpi_set_buffer calls in c_wrappers are replaced by mpi_set_normalized calls and so there are no more 0-led mpi values passed on to the C code when calling rsa from Ada (since this is the purpose of those c_wrappers: to provide a sane interface for Ada to work with the insanity of C for RSA needs). Obviously, if you insist on calling the C rsa encrypt/decrypt methods directly, it's up to you to make sure you don't pass them 0-led values. While I could change the encrypt/decrypt methods themselves to normalize all the keys' components before doing anything, I think that's a very ugly and ultimately incorrect thing to do: the encrypt/decrypt should use precisely what they are given, not go about tampering with the inputs, regardless of "reasons". Yes, it is ugly and incorrect that MPI forces this normalization nonsense but that's not a justification for messing the encrypt/decrypt functions to cover up for it.

Note also that I specifically chose NOT to include the normalization in the existing method mpi_set_buffer because, on one hand, it's not the job of mpi_set_buffer to trim its inputs and, on the other hand, there is a need for mpi_set_buffer precisely as it is: there is code in there relying on being able to set the buffer of an mpi to anything, including 0-led vectors (even if, at times, that doesn't remain 0-led for long). So no, modifying mpi_set_buffer is not a good option, even without considering the fact that MPI is better thrown away than changed.

The rest of the .vpatch for this chapter of SMG Comms simply contains the 2 additional tests (and the changes needed for them in the test environment) that I wrote: one for the RSA C code, to flag the issue, and one for the Ada code, to ensure that there is at least one test with an exponent of 8 octets. I first used the rsa code with the length of e set to 8 to generate a pair of RSA keys, which the new test then uses. So there is now a new file, "8_keys.txt", containing this new set of keys, and the Ada test is simply another call with different parameters to read its input from this file as opposed to another.

Given that the arbitrary size of e essentially touches EuCrypt code, I also packed those minimal changes to smg_rsa.h and to key generation, together with the new test using a shorter e, into a .vpatch for EuCrypt. I've also added stern warnings there at the encrypt/decrypt regarding the 0-led issue, since it is the responsibility of the caller to either make sure they don't provide 0-led values or otherwise deal with a potentially-blocking call. Both .vpatch files and their corresponding signatures are on my Reference Code Shelf as well as linked here for your convenience:

CNCERT: Security Advisory on the SQLite Remote Code Execution Vulnerability

Security advisory number: CNTA-2018-0031

On December 10, 2018, the China National Vulnerability Database (CNVD) catalogued the SQLite remote code execution vulnerability (CNVD-2018-24855) discovered and reported by the Tencent Blade Team of Tencent's Security Platform Department. An attacker exploiting this vulnerability can execute code remotely without authorization. Details of the exploit have not yet been made public.

I. Vulnerability Analysis

SQLite is an embedded database that supports most of the SQL standard and implements a serverless, zero-configuration, transactional SQL database engine; it is widely used in web browsers, operating systems, and embedded systems. Web SQL Database introduces a set of APIs for manipulating client-side databases with SQL, implemented on top of SQLite, and runs in current versions of Chrome/Chromium.

Chromium's official November security advisory includes this SQLite remote code execution vulnerability. The attack calls the Web SQL API to create a temporary database and maliciously modifies SQLite's internal tables so that execution reaches a faulty branch. The attacker can then trigger the vulnerability through SQLite database index operations, mounting a remote attack on the browser and executing arbitrary code in the browser's renderer process.

At the same time, SQLite is a foundational component used as an extension library by many programs, such as PHP and Python; with the same attack code, an attacker can execute arbitrary code locally or remotely in the context of those processes, or cause a denial of service.

CNVD's overall rating for this vulnerability is "high severity".

II. Scope of Impact

According to the official advisory, the affected versions are:

Chrome versions below 71.0.3578.80

Browsers built on the Chromium engine

The Android WebView component and third-party apps that use WebView

Programs using the SQLite component or SQLite libraries (especially programs that may execute SQL statements on externally supplied, potentially malicious input, such as the PHP SQLite3 extension)

III. Remediation Recommendations

1. Official fixes from Google/SQLite

Chromium-based products should update to the official stable release 71.0.3578.80, or to code at commit c368e30ae55600a1c3c9cb1710a54f9c55de786e or later

(https://chromium.googlesource.com/chromium/src/+/c368e30ae55600a1c3c9cb1710a54f9c55de786e)。

SQLite and products embedding the SQLite library should update to version 3.26.0, the current official stable release (https://www.sqlite.org/releaselog/3_26_0.html).

2. Temporary mitigations:

(1) Disable WebSQL: build without the third-party sqlite component

WebSQL has no formal specification and is currently supported only by Chrome and Safari, and Safari has already stripped out most of its sqlite functionality. If disabling this feature does not affect your product, WebSQL can be turned off.

Verification: in the rebuilt engine, it should no longer be possible to call the openDatabase function from the console.

(2) Disable the fts3 feature in SQLite

If disabling this feature does not affect your product, it can be turned off. For Safari's approach to disabling fts3 in WebKit, see

https://github.com/WebKit/webkit/commit/36ce0a5e2dc2def273c011bef04e58da8129a7d6。

Verification: when running the following JavaScript, the feature is disabled if {a: 1} is not returned:

var db = openDatabase('xxxxx' + parseInt(Math.random() * 10000).toString(), 1, 'fts_demo', 5000000);
db.transaction(function (tx) {
    tx.executeSql('create virtual table x using fts3(a,b);');
    tx.executeSql('insert into x values (1,2);');
    tx.executeSql('select a from x;', [], function (tx, results) {
        console.log(results.rows[0]);
    });
});

(3) Use the browsing service provided by Tencent QQ Browser (https://x5.tencent.com/)

The Tencent X5 SDK (v3.6.0.1371) has already fixed this vulnerability; third-party Android apps can switch from WebView to the X5 engine to remediate it.

Appendix: reference links:

(1) Google security advisory:

https://chromereleases.googleblog.com/2018/12/stable-channel-update-for-desktop.htm

(2) SQLite release notes:

https://www.sqlite.org/releaselog/3_26_0.html

(3) SQLite vulnerability details:

https://blade.tencent.com/magellan/

Thanks to Tencent's Security Platform Department for the technical support behind this advisory.


CNCERT: Security Advisory on the ThinkPHP Remote Code Execution Vulnerability

Security advisory number: CNTA-2018-0032

On December 11, 2018, the China National Vulnerability Database (CNVD) catalogued a ThinkPHP remote code execution vulnerability (CNVD-2018-24942). An attacker exploiting this vulnerability can execute code remotely without authorization. The principle of the exploit is already public, and the vendor has released a new version that fixes the vulnerability.

I. Vulnerability Analysis

ThinkPHP uses an object-oriented development structure and the MVC pattern, blending ideas from Struts with TagLib (tag libraries) and RoR's ORM mapping and ActiveRecord pattern; it is a lightweight, highly compatible, easy-to-deploy PHP framework developed in China.

On December 9, 2018, the ThinkPHP team published release notes for an update fixing a remote code execution vulnerability. The flaw stems from the framework's insufficient validation of controller names, which allows remote code execution when forced routing is not enabled. An attacker exploiting it can run remote commands against a target website without authorization.

CNVD's overall rating for this vulnerability is "high severity".

II. Scope of Impact

The affected product versions include:

ThinkPHP versions 5.0 through 5.1.

The CNVD secretariat probed web servers running the ThinkPHP framework and found roughly 43,000 such servers worldwide; by country, the top three are China (39,000), the United States (4,187), and Canada (471).

III. Remediation Recommendations

The ThinkPHP vendor has released a new version fixing this vulnerability, and CNVD advises users to upgrade to the latest version immediately:

https://blog.thinkphp.cn/869075

Appendix: reference links:

https://blog.thinkphp.cn/869075


A 300,000-Strong Hacker Community Is Eyeing Blockchain; Attacks May Intensify in the Bear Market

$
0
0


Over the past two years, the cryptocurrency market has repeatedly minted overnight fortunes, the ranks of cryptocurrency advocates have surged, and cryptocurrency has come to be seen as a new form of investment. As interest in digital currency has soared across society, network security issues such as hacker attacks have become a growing concern for the cryptocurrency industry.

Attack Frequency Rises Sharply; Losses Exceed $2.7 Billion in Half a Year

These attacks range from large-scale data breaches to ransomware campaigns spanning the globe. According to Tencent's 2018 first-half blockchain security report, economic losses from security incidents involving blockchain-based cryptocurrencies reached $2.7 billion.

Measured by number of incidents, available statistics show fewer than 10 blockchain security incidents per year in the early days; in 2017 there were 15 attacks, with $634 million in losses; and by August 2018 there had been 75 attacks. EOS alone saw more than six theft or attack incidents in September and October, losing $6.6 million. Counting attacks on other cryptocurrencies, 2018's total is clearly well above 80. Seen in that light, $2.7 billion in half a year may even be an optimistic figure.

And what is the current state of the attackers? Preliminary statistics put the number of individuals and hacker groups worldwide watching blockchain at roughly 300,000, and some 90% of them have attempted attacks large or small. Hackers, it seems, have become permanent fixtures of the blockchain space.

Hacker Attacks May Intensify in the Bear Market

Some say hacker attacks happen only when the crypto market is rising, so with the bear market in full swing there is no need to worry much about them.

In fact, attacks not only continue in a bear market, they may well intensify, for several reasons:

1. For cryptocurrency believers, a bear market is no reason to abandon crypto investment; instead they buy the dip and wait for the next bull market. Underground hackers, like the believers, also expect a bull market to come.

2. Underground hacking operates as an industry. As long as digital-currency projects exist, there is no reason for attacks to stop; and as long as the believers remain, the projects will not disappear, so neither will the attacks.

3. With the bear market dominant, project teams that assume "nobody attacks in a bear market" may let their security programs slip. Once the building of security defenses slackens, attacks become that much easier for hackers.

In short, a bear market will not slow hackers down; if anything, it may accelerate attacks.

High Vigilance, Strong Response

So how should industry participants weathering the bear market deal with attacks that may well intensify?

Exchanges should stay vigilant even in a bear market: on one hand, cultivate security talent for the long term to ease the skills shortage; on the other, deploy network security defenses, patch operating system and application vulnerabilities promptly, and keep strengthening the security program to prevent serious, large-scale theft of funds.

Remaining believers should fully understand the risks before handing over personal data or holdings, run security software on their computers and phones, avoid phishing traps, and prefer cold wallets with a relatively high safety margin. We cannot know which method hackers will use to attack, but getting the details right on our own side is the best defense.

Source: 九个亿财经



AVANT and Alert Logic Partner to Enable AVANT's Growing Network of Trusted Advisors

AVANT becomes first master agent to represent Alert Logic to the Agent Channel community and will accelerate security channel sales worldwide

CHICAGO (BUSINESS WIRE): AVANT Communications ("AVANT"), a master agent and leader in channel sales enablement of next generation technology solutions, and Alert Logic, the SIEMless Threat Management company, today announced a partnership that will help businesses worldwide achieve the right level of security and compliance coverage across any environment. This partnership represents the first time Alert Logic is being represented to the agent channel community through a master agent.



AVANT's growing ecosystem of channel sales professionals, known as Trusted Advisors, helps organizations navigate today's fast-changing IT landscape and make the right technology choices to solve today's business problems.

The partnership enables AVANT's extensive network of Trusted Advisors to resell the Alert Logic SIEMless Threat Management offering, which seamlessly connects an award-winning security platform, cutting-edge threat intelligence, and expert defenders to provide the best security and peace of mind for businesses 24/7. The Alert Logic offering includes Security Operations Center (SOC) experts, who monitor customers' environments 24/7 and provide incident management with guidance on how to address threats. With Alert Logic, organizations can increase their security and compliance capabilities at a lower total cost than investing in multiple point solutions or traditional security outsourcing.

"AVANT prides itself in leading the channel industry with the next wave of disruptive companies changing IT consumption models. The partnership with AVANT is the first of its kind for Alert Logic and will be directly enabled through the agent channel community at a very critical time, when the growing shortage of security talent is driving the highest demand ever for managed security offerings," said Ian Kieninger, CEO and co-founder of AVANT. "Welcoming Alert Logic to our expanding portfolio of security services will advance our mission to drive the agent community into one of the fastest-growing sectors of the information technology industry. This is going to drive sales for our network of Trusted Advisors now and in the months and years to come."

Partnering with leading technology distributors like AVANT is a core growth strategy for Alert Logic as the company furthers its mission to provide organizations with the right security and compliance coverage at an optimal cost. AVANT's fully-enabled and driven network of agents will extend Alert Logic's reach to help organizations address the evolving threat landscape, expanding compliance risks and resource constraints.

"We partnered with AVANT because of their deep expertise in IT channel enablement, with a strong Trusted Advisor community that understands how to help organizations benefit from higher-value IT solutions," said Christopher Rajiah, Senior Vice President of Global Alliances and Partnerships, Alert Logic. "AVANT is an exceptional partner to bring Alert Logic to the Agent Channel community. This partnership will power AVANT's network of Trusted Advisors to help businesses navigate today's ever-changing threat landscape, while addressing compliance risks and resource constraints. Together, we're going to bring SIEMless Threat Management to organizations worldwide."

For more information on AVANT, please visit: www.goavant.net

For more information on Alert Logic, please visit: https://www.alertlogic.com/

About AVANT Communications

AVANT Communications is a channel sales enablement company and the nation's premier distributor for next generation technologies. AVANT adds unique value with its focus and expertise in channel sales assistance, sales training, sales guidance, and sales tools to fuel IT services business growth. From complex cloud designs to global wide-area network deployments, AVANT sets the industry standard in enabling its partners and clients to make intelligent decisions about services, technology and cost-effective communications. For more information, visit www.goavant.net, or connect on Twitter and LinkedIn.

About Alert Logic

Alert Logic seamlessly connects an award-winning security platform, cutting-edge threat intelligence, and expert defenders to provide the best security and peace of mind for businesses 24/7, regardless of their size or technology environment. More than 4,000 organizations rely on Alert Logic SIEMless Threat Management to ensure the right level of security and compliance coverage at a lower total cost than point solutions, SIEM tools, or traditional security outsourcing vendors. Founded in 2002, Alert Logic is headquartered in Houston, Texas, with offices in Austin, Seattle, Dallas, Cardiff, Belfast, London and Cali, Colombia. For more information, visit www.alertlogic.com.

Contacts

MEDIA CONTACT FOR AVANT:
Rosie Gillam
AvantPR@walkersands.com
(312) 561-2497

MEDIA CONTACT FOR ALERT LOGIC:
Christine Blake
Christine@w2comm.com
(703) 877-8114



Datadog and Aqua Security Partner to Provide Seamless Visibility into Container- ...


Aqua integrates with Datadog to give DevOps teams real-time security metrics and events

Seattle, WA, December 11, 2018 (KubeCon/CloudNativeCon): Aqua Security, the leading platform provider for securing container-based and cloud native applications, today announced the integration of its platform with Datadog's cloud monitoring and analytics platform. With this integration, Aqua provides real-time visibility into the security posture of cloud native applications to Datadog users, including information on vulnerable images, untrusted running containers, and security anomalies found by Aqua in the runtime environment.

For DevOps teams that continuously monitor applications for operational parameters such as performance, bug tracking, and errors, security events are often a blind spot that is handled elsewhere, although they may directly affect application uptime and resiliency. The integration of Aqua's granular security information into Datadog's comprehensive monitoring makes it possible to identify issues quickly and analyze their impact on application availability.

"As organizations shift to more dynamic infrastructure through cloud and container technologies, communication between application and security teams is more important than ever," said Ilan Rabinovitch, VP Product and Community at Datadog. "By combining Datadog's deep insights into containerized application performance with Aqua Security's enforcement of security best practices, we are helping organizations bridge the gap between these traditionally siloed teams."

The integration between Datadog and Aqua CSP features pre-built Datadog dashboards that display:

Container images currently in Aqua’s scan queue

Known vulnerabilities and security issues found in existing images

Containers running from unauthorized images

Aqua runtime policy violations and audit events

Additionally, Datadog users can use the data provided in the Aqua dashboards to set up their own alerts, aggregate data streams from different applications, and customize how data is displayed.

"We are excited to be partnering with Datadog to deliver a more complete security view to DevOps teams," said Amir Jerbi, CTO and co-founder of Aqua Security. "In the cloud native era, ensuring security can no longer be the exclusive burden of security teams; instead it should be part of the overall operational soundness of applications throughout their lifecycle. Our integration with Datadog creates a valuable shortcut that allows security issues to be detected early and fixed quickly, preventing escalated security incidents in production."

About Aqua Security

Aqua Security enables enterprises to secure their container and cloud-native applications from development to production, accelerating application deployment and bridging the gap between DevOps and IT security. Aqua's Container Security Platform provides full visibility into container activity, allowing organizations to detect and prevent suspicious activity and attacks in real time. Integrated with container lifecycle and orchestration tools, the Aqua platform provides transparent, automated security while helping to enforce policy and simplify regulatory compliance. Aqua was founded in 2015 and is backed by Lightspeed Venture Partners, Microsoft Ventures, TLV Partners, and IT security leaders, and is based in Israel and Boston, MA.

For more information, visit www.aquasec.com or follow us on twitter.com/AquaSecTeam

About Datadog

Datadog is a monitoring service for hybrid cloud applications, assisting organizations in improving agility, increasing efficiency, and providing end-to-end visibility across the application and organization. These capabilities are provided on a SaaS-based data analytics platform that enables Dev, Ops and other teams to accelerate go-to-market efforts, ensure application uptime, and successfully complete digital transformation initiatives. Since launching in 2010, Datadog has been adopted by more than 9,000 enterprises including companies like Activision, AT&T, Deloitte, Peloton, Samsung, Seamless, The Washington Post, T-Mobile, Turner Broadcasting, and Whole Foods.


OnePlus 6T sees 249% sales boost from T-Mobile, CEO talks 5G, TV, security, smal ...


No longer the startup it once was, OnePlus has seen massive growth over the past few years. Now, with the help of its first US carrier launch, the OnePlus 6T is seeing a huge sales boost as revealed in a new interview.



Speaking with PCMag, CEO Pete Lau discussed a handful of interesting topics about OnePlus, kicking off with details about its T-Mobile partnership. Apparently, the OnePlus 6T has seen a 249% boost in sales in the United States compared to the OnePlus 6, attributed to the T-Mobile partnership. Specific numbers aren't noted, but we do know that the OnePlus 6 sold 1 million units in just 22 days earlier this year, though that was a global total.

Lau further discussed how that sales boost is still occurring even though the OnePlus 6T lacks a headphone jack. He noted that "it was a very painful decision, but we can't satisfy everyone." That discussion led to talk of the wish that OnePlus would produce a smaller smartphone at some point. Lau cites battery life, though, as the barrier. He explains:

If we can solve the battery problem, we would definitely make a smaller one. I see a lot of demand for this kind of size. But looking at the industry, the technology of batteries hasn't changed too much over all these years.

Personally, though, I don't quite see the problem here. Other smaller devices, such as Google's 5.5-inch Pixel 3, have a far smaller footprint than the OnePlus 6T but still manage fine battery life. Hopefully, OnePlus can get over whatever hurdles are holding it back from building a smaller device.

The interview goes on to talk more about OnePlus' 5G smartphone. Lau reiterated the plans to launch a 5G phone with EE in Europe, and says that in the US the company is more likely to work with T-Mobile or Sprint for 5G devices, because those carriers use frequencies below 6GHz, which are apparently easier to build for.

Lau also interestingly mentions that OnePlus is looking to up its security game in the world of 5G. Using BlackBerry and Apple as examples, Lau shares that the company is currently "auditioning security partners."

Lastly, this interview offers a mention of the upcoming OnePlus TV. The company previously announced its plans for this project earlier this year. In the interview, Lau talks about how the company wants to create a “burdenless” experience and that, currently, there’s no specific timeline in place for launch.

9 Core & Specialty AWS Security Certifications


Cloud computing has become a necessity for almost all businesses. Given this reality, there is a significant need to design, develop, deploy, manage, and secure workloads in the cloud.

AWS offers a multitude of certifications , and having relevant certifications is an important way you can demonstrate cloud credibility and competence as an individual and how your organization can demonstrate value to its customers.

With that in mind, here’s a list of nine key AWS Security Certifications to consider. Whether you’re just starting to build your cloud credentials, looking to expand your skills and expertise in a particular area, or want to deepen your expertise, there should be something to match your needs among these industry-recognized certifications.

1. AWS Certified Cloud Practitioner

AWS Certified Cloud Practitioner is an introductory certification specifically created to demonstrate an individual’s overall understanding of the AWS Cloud. This examination is a recommended start to achieving Specialty certification or an optional start toward Associate certification.

What Are the Requirements for the Exam?

Prerequisites: The candidate must have at least six months of cloud experience, with knowledge of the fundamental architectural principles of AWS, use cases for AWS services, and AWS deployment and operating principles and security models.

Details: Candidates have 90 minutes to complete the exam, which costs $100.

Who Should Obtain This Certification?

Individuals seeking to have their knowledge validated for an overall understanding of the AWS Cloud.

2. AWS Certified Developer Associate

The AWS Certified Developer Associate exam deals with the development and maintenance of applications using AWS.

What Are the Requirements for the Exam?

Prerequisites: The candidate must have more than one year's experience programming and writing code for AWS software and applications. Additionally, the candidate must have knowledge of best practices for using AWS workflow, notification, and database services, along with expertise in design, development, and management of AWS-based applications.

Details: Candidates have 80 minutes to complete the exam, which costs $150.

Who Should Obtain This Certification?

Individuals who want to have their skills validated for developing and managing applications on the AWS Cloud.

3. AWS Certified SysOps Administrator Associate

AWS Certified SysOps Administrator Associate deals with system administration, specifically, expertise in deployment, management, and operations on the AWS platform.

What Are the Requirements for the Exam?

Prerequisites: The candidate must have more than one year of experience operating AWS-based applications. Additionally, the candidate must have knowledge of managing data centers, AWS services, and security systems.

Details: Candidates have 80 minutes to complete the exam, which costs $150.

Who Should Obtain This Certification?

Individuals seeking to have their skills and knowledge validated for implementation, migration, and operation of applications based on the AWS platform.

4. AWS Solutions Architect Associate

The AWS Solutions Architect Associate certification deals with how to architect and deploy secure and robust applications on AWS technologies.

What Are the Requirements for the Exam?

Prerequisites: The candidate must have one year of experience designing scalable, efficient, and fault-tolerant systems on AWS. Additionally, the candidate must have knowledge of network technologies, client interfaces, security systems, and their integration on the AWS platform.

Details: Candidates have 130 minutes to complete the exam, which costs $150.

Who Should Obtain This Certification?

Individuals seeking to have their skills and knowledge validated in designing and deploying robust and secure applications for AWS.

5. AWS Certified DevOps Engineer Professional

The AWS Certified DevOps Engineer Professional exam deals with provisioning, operating, and managing distributed application systems on the AWS platform.

What Are the Requirements for the Exam?

Prerequisites: Either AWS Certified Developer Associate or AWS Certified SysOps Administrator Associate.

The candidate must have two or more years’ experience provisioning, operating, and managing AWS environments. Additionally, the candidate must have knowledge of designing and managing tools to automate production operations.

Details: Candidates have 170 minutes to complete the exam, which costs $300.

Who Should Obtain This Certification?

Individuals seeking to have their developer and system operational skills validated on a professional level.

6. AWS Certified Solutions Architect Professional

The AWS Certified Solutions Architect Professional exam validates advanced technical skills and experience centering on designing and deploying dynamically scalable, highly available, fault-tolerant, and reliable applications on AWS.

What Are the Requirements for the Exam?

Prerequisites: AWS Certified Solutions Architect Associate

The candidate must have two or more years’ experience in the best practices of architectural design and deploying cloud architecture on AWS. Additionally, the candidate must have knowledge of cost optimization strategies, selection of appropriate AWS Services, and migration of complex application systems on AWS.

Details: Candidates have 170 minutes to complete the exam, which costs $300.

Who Should Obtain This Certification?

Individuals seeking to have their advanced architectural, design, and development skills validated on a professional level.

7. AWS Certified Big Data Specialty

The AWS Certified Big Data Specialty certification deals with analyzing data and the skills required to extract data by implementing AWS Services.

What Are the Requirements for the Exam?

Prerequisite: Any AWS Associate Level Certification.

The candidate must have a minimum of five years’ experience in AWS tools for data analysis. Additionally, the candidate must have knowledge of the design and maintenance of Big Data along with best practices for securing Big Data solutions.

Details: Candidates have 170 minutes to complete the exam, which costs $300.

Who Should Obtain This Certification?

Individuals seeking to validate their skills in Big Data and networking.

8. AWS Certified Advanced Networking Specialty

The AWS Certified Advanced Networking Specialty certification validates advanced technical skills and experience in designing and implementing AWS and hybrid IT network architectures at scale.

What Are the Requirements for the Exam?

Prerequisite: Any AWS Associate Level Certification.

The candidate must have a minimum of five years’ experience in the design and implementation of network solutions. Additionally, the candidate must have skills in AWS networking concepts as well as architecting, developing, and deploying AWS-based cloud solutions. Knowledge of security implementation and network optimization is also a must.

Details: Candidates have 170 minutes to complete the exam, which costs $300.

Who Should Obtain This Certification?

Individuals seeking to validate their network architecture skills for all AWS Services.

9. AWS Certified Security Specialty

The AWS Certified Security Specialty certification enables experienced security professionals to demonstrate their knowledge of and ability to secure the AWS platform.

What Are the Requirements for the Exam?

Prerequisite: The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification.

Candidates require a minimum of five years of IT security experience designing and implementing security solutions, knowledge of security controls for workloads on AWS, and at least two years of direct practical experience securing AWS workloads.

Details: Candidates have 170 minutes to complete the exam, which costs $300.

Who Should Obtain This Certification?

This certification is suited to anyone who wants to validate their AWS knowledge across a range of security topics including risk assessment, infrastructure security, data protection and encryption, identity and access management, incident response, and logging and monitoring.

Wrapping Up . . .

AWS certifications can help you build foundational and advanced knowledge in a broad range of best practices for cloud technology while demonstrating that your skills have been validated by one of the most well-known organizations in cloud computing.

Becoming AWS certified can help advance your career and connect you with businesses seeking skilled cloud professionals. And collectively, AWS certifications help to demonstrate an organization’s knowledge, capabilities, and commitment to its prospects and customers.

If you have opinions or recommendations about AWS certifications, let us know. And if you're interested in a less formal way of keeping up on all things Security, Dev, and Ops, subscribe to our blog and tune in to our brand new podcast ― Your System Called.

*** This is a Security Bloggers Network syndicated blog from the Threat Stack blog, authored by Alan Nakashian-Holsberg. Read the original post at: https://www.threatstack.com/blog/9-core-specialty-aws-security-certifications

How Well Is Your Organization Investing Its Cybersecurity Dollars?


The principles, methods, and tools for performing good risk measurement already exist and are being used successfully by organizations today. They take some effort -- and are totally worth it.

There's an old saying in marketing: "Half of your marketing dollars are wasted. You just don't know which half." This has become far less true in recent years for organizations that apply rigorous quantitative marketing analysis techniques.

Unfortunately, given common practices in cybersecurity today, you could update that old saying by substituting "cybersecurity" for "marketing" and have to wonder whether it is accurate. At the very least, you'd have to decide how you would defend the claim that it isn't. For example, if I asked what the most valuable cybersecurity investment your organization has made in the past three years was, how would you answer?

How Do We Define Cybersecurity Value?

You can't reliably measure what you haven't clearly defined, so before we can have an intelligent conversation about cybersecurity value, we first have to clearly define what we mean. For this, I turn to the question I've heard executives ask many times over the years: "How much less risk will we have if we spend these dollars on cybersecurity?" Clearly, from their perspective (and it's their perspective that matters) cybersecurity value should be measured in how much less risk the organization faces.

Unfortunately, what I commonly see in board reports, budget justifications, and conference presentations is something different. Most of the time, as an industry we appear to lean on implicit proxies for measuring risk reduction ― things like NIST CSF (National Institute of Standards and Technology Cybersecurity Framework) benchmark improvements, credit-like scores, and higher compliance ratings. Don't get me wrong; these are useful directional references that generally mean an organization has less risk. The problem is that we don't know how much less risk, and the "how much" matters.

For example, if the overall NIST CSF score for your organization went from 2.5 to 2.9 last year, what does that 0.4 improvement mean in terms of risk reduction? Along the same lines, how much less risk comes from reducing the time to patch or shortening the time to detect a breach?

Measuring Risk Reduction

Everything we do in cybersecurity in some way affects, directly or indirectly, the probable frequency and/or magnitude of loss-event scenarios. That being the case, measuring the value of our efforts begins with clearly defining the loss-event scenarios we're trying to affect. At a superficial level, this often boils down to confidentiality breaches, availability outages, and compromises of data integrity. That level of abstraction isn't usually very useful in risk measurement though, so we need to be more specific.

A more reasonable level of specificity would include, for example, a confidentiality breach of which information, by which threat community, via which vector. At this level of abstraction, you can begin to evaluate the effect of cybersecurity controls on the frequency and magnitude of loss for that scenario.

If that sounds like more work than you're used to applying in risk measurement, it's not surprising. Most of what passes for risk measurement today is nothing more than someone proclaiming high/medium/low risk.

Value Analysis

To drive my point home, let me share a high-level example from my past as a CISO. The organization I worked for had huge databases containing millions of consumer credit card records. The Payment Card Industry standard called for data at rest encryption (DaRE), which at the time would have cost the organization well over a million dollars, required modifications to key applications, and taken over a year and a half to implement.

Rather than simply go to my executives with an expensive compliance problem, I took a couple of days to do the following:

1. Identify which loss-event scenarios DaRE was relevant to as a control.
2. Perform a quantitative risk analysis using Factor Analysis of Information Risk (FAIR) to determine how much risk we currently faced from these scenarios.
3. Perform a second analysis estimating the reduction in risk if we implemented DaRE.
4. Identify a set of alternative controls that were also relevant to the same loss-event scenarios. (These controls cost a fraction as much as DaRE, didn't require application changes, and could be implemented in a few months.)
5. Perform a third analysis estimating the reduction in risk if we implemented these alternative controls (which turned out to be a greater reduction in risk than DaRE).
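For readers who want a feel for what steps 2, 3, and 5 boil down to, here is a minimal sketch of mine, not the author's: FAIR itself prescribes calibrated PERT-style distributions and far more input rigor, simplified here to uniform draws, and every number below is an invented input, not a figure from the analysis described above.

#include <stdio.h>
#include <stdlib.h>

/* One loss-event scenario: calibrated min/max estimates for how often
   it happens (events per year) and how much each event costs (dollars). */
struct scenario {
    double freq_lo, freq_hi;
    double mag_lo,  mag_hi;
};

static double draw(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

/* Monte Carlo estimate of mean annualized loss exposure for one scenario. */
static double annualized_loss(struct scenario s, int trials)
{
    double total = 0.0;
    for (int i = 0; i < trials; i++)
        total += draw(s.freq_lo, s.freq_hi) * draw(s.mag_lo, s.mag_hi);
    return total / trials;
}

int main(void)
{
    struct scenario current  = { 0.10, 2.0, 5e4, 5e6 };  /* without the control */
    struct scenario with_ctl = { 0.05, 1.0, 5e4, 2e6 };  /* with the control    */
    srand(1);
    printf("current exposure:     $%.0f/yr\n", annualized_loss(current, 100000));
    printf("with control applied: $%.0f/yr\n", annualized_loss(with_ctl, 100000));
    return 0;
}

The difference between the two estimates is the risk reduction the control buys, which can then be weighed directly against the control's cost.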

The upshot is that I was able to go to my executives and the PCI auditor with options that included clearly described cost-benefit analyses. From their perspective, it was a no-brainer.

By not simply telling my executives that we had to bite the compliance bullet, the organization was able to save over a million dollars, avoid significant operational disruption, and reduce more risk in a shorter time frame.

The Bottom Line

Every dollar spent on cybersecurity is a dollar that can't be spent on the many other business imperatives with which an organization must deal. For this reason (and because we have an inherent obligation to be good stewards of our resources), we must be able to effectively measure and communicate the value proposition of our cybersecurity efforts.

Fortunately, the principles, methods, and tools for performing good risk measurement already exist and are being used successfully by organizations today. Do these analyses take more effort than proclaiming high/medium/low risk, or falling back on ambiguous metrics? Absolutely. Is the extra effort worthwhile? I'll answer based on my experience as a CISO ― yes. It's not even close.

Fundamentals of Security: What is SSH?


We’re drawing on our security knowledge to provide a series on the fundamentals of securing devices and networks. The previous item in our series was an introduction to why asset management is important for securing networks. In this segment, we introduce SSH and remote servers.


SSH (Secure Shell) gives administrators an encrypted channel for logging into and running commands on remote machines. SSH isn't perfect though, and it depends on users to keep its servers secure.

SSH servers are found in everything from Linux servers to VOIP phones to security cameras. Very often, SSH servers are installed to allow administrators to access a remote shell, from which they can update software and change system settings.



SSH servers are designed to be secure, but users play a role in maintaining that security. The most common problem arises when a user's password is easily guessed or cracked. For example, using the password "password" creates an opening for an intruder. The user's traffic would still be encrypted, but a malicious actor could sign in as the user and change settings or view sensitive data. From this vantage point, the attacker could hop from machine to machine. For more information, read our post on lateral attacks.

The other problem arises when vulnerabilities are found in SSH servers themselves. Think about how you reached this page in your browser: this blog post is stored on a server somewhere in the world. While you didn't have to input a password to get to this content, you would need special access to edit it. If the server containing this blog post were running an unpatched SSH server open to the public, an attacker could connect and maliciously change the text. An edited blog post isn't terrible, but imagine if that server were in a critical position and responsible for something like keeping time.

In October, libssh patched a vulnerability that allowed an attacker to successfully authenticate without providing credentials. This means that any SSH server that uses libssh needs to be updated, or attackers will continue to be able to log into that server. Keeping SSH servers updated isn't trivial. Administrators are often dependent on manufacturers to provide the patch, which takes time. After a patch is released, administrators need downtime on the machine in order to update it. Unfortunately, downtime is often impractical, and machines continue to run vulnerable servers.



If you'd like to use SSH, you'll need an SSH client. On computers running macOS (Macs) and Linux, you already have one installed. You can open a terminal window and type 'ssh username@<server-ip-address>' to log into an SSH server - for example, 'ssh admin@192.168.1.1', substituting your own username and your device's address. If you're running Windows, you'll likely need to install an application like PuTTY.
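The same login can also be driven from code. Here is a minimal sketch using libssh (the library mentioned above); the host, user, and password are invented for the example, and error handling is pared down:

#include <stdio.h>
#include <libssh/libssh.h>

int main(void)
{
    ssh_session s = ssh_new();
    if (s == NULL) return 1;

    /* host and user are placeholders -- substitute your own device */
    ssh_options_set(s, SSH_OPTIONS_HOST, "192.168.1.1");
    ssh_options_set(s, SSH_OPTIONS_USER, "admin");

    if (ssh_connect(s) != SSH_OK) {
        fprintf(stderr, "connect failed: %s\n", ssh_get_error(s));
        ssh_free(s);
        return 1;
    }

    /* a real client must verify the server's host key here,
       before sending any credentials */
    if (ssh_userauth_password(s, NULL, "use-a-strong-password") != SSH_AUTH_SUCCESS)
        fprintf(stderr, "auth failed: %s\n", ssh_get_error(s));

    ssh_disconnect(s);
    ssh_free(s);
    return 0;
}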

Try SSHing into your home networking devices and find out if they respond. Imagine how many computers, devices, and machines at work have SSH servers running. Then get out there and secure them.

Come back soon for the next item in our series and, in the meantime, check out our write-up on implants and supply chains.

Damning Report on Equifax Security Failures is a Lesson for all Enterprises



No clear lines of authority, complex and “antiquated” custom IT infrastructure…

Equifax allowed over 300 security certificates to expire, including 79 certificates for monitoring business critical domains, prior to a data breach that exposed the personal data of over 143 million people, including 15.2 million UK records.

That's according to a new report from the US House of Representatives' Oversight Committee. It gave short shrift to the company's argument that one IT technician failing to patch was to blame for the breach, which saw hackers exploit a vulnerability in Apache's Struts framework to steal the personal data of half America's population.

The 96-page report [pdf] is a salutary lesson in how a major breach happened, and its two main points of failure will sound eerily familiar to many enterprises.
Equifax Security Failure: Lack of Accountability and IT Complexity Blamed

As the report notes: "Firstly, a lack of accountability and no clear lines of authority in Equifax's IT management structure existed, leading to an execution gap between IT policy development and operation. This also restricted the company's implementation of other security initiatives in a comprehensive and timely manner."

"Secondly, Equifax's aggressive growth strategy and accumulation of data resulted in a complex IT environment. Equifax ran a number of its most critical IT applications on custom-built legacy systems. Both the complexity and antiquated nature of Equifax's IT systems made IT security especially challenging."

How the Hack Happened

The report is also a compelling insight into how the hack occurred. During the attack, which began in May 2017 and lasted for 76 days, the attackers dropped web shells (web-based backdoors) to obtain remote control over Equifax's network. They found a file containing unencrypted credentials (usernames and passwords), enabling them to access sensitive data outside of the ACIS environment. The attackers were able to use these credentials to access 48 unrelated databases.

See also: Equifax Dodges GDPR Bullet as ICO Fines it 500,000 Via 1998 Data Act

The report notes: “Attackers sent 9,000 queries on these 48 databases, successfully locating unencrypted personally identifiable information (PII) data 265 times. The attackers transferred this data out of the Equifax environment, unbeknownst to Equifax. Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate.”

Another learning point: 67 of Equifax's self-hosted webapps can't have generated any IDS alerts for almost two years due to expired SSL inspection certs. If you aren't getting any IDS alerts, you need a process to detect, and prioritise remediation.

― Kevin Beaumont (@GossiTheDog) December 11, 2018

Chris Wallis, founder of UK-based security monitoring provider Intruder, told Computer Business Review: "An outsourced approach may have helped in this case, allowing external teams of experts to properly configure tools capable of detecting the weaknesses on the perimeter, while the internal teams focused on the detection and response capabilities. Equally, better asset management and modern cloud deployment techniques could have helped the security team know where to aim their scans."

He added: "What's also amazing is the time between the vulnerability being announced and exploits occurring. This is a trend that we also saw with the Drupal vulnerabilities this year: the time between vulnerabilities being announced and hackers exploiting them is days, not months. This raises questions about how many companies secure themselves, and in fact why the PCI Data Security Standard is still only mandating quarterly vulnerability scans. If this doesn't change soon, we're likely to see our credit card data going the same way as our credit reference data."

Identity Needs to be “More Dynamic”
Chris Morales, head of security analytics at Vectra, told Computer Business Review in an emailed statement: "The best data protection strategy is to not have the data… The definition of identity needs to be more dynamic. A person would better be identified based on biometrics and behaviour, not just SSN (and driver's license or any other type of simple digit-based identifier). What is needed is modernisation of back-end systems to support new authentication techniques that would better serve as a personal identity."

He added: "As for preventing the breach, I don't believe prevention will ever be 100%. That is unrealistic. I bring this up because the report states the breach was entirely preventable. I don't believe that to be true. It is a classic could-have, should-have scenario. All networks have become highly complex and the failure comes down to people and process, not necessarily technology. As long as a motive exists, attackers will continuously attempt to compromise networks until they succeed. It is the same notion as believing a wall would stop the drug trade; the criminals build tunnels instead."

"What I do believe is that we can improve our ability to detect and respond when a breach occurs: by looking for the types of behaviours an attacker would perform and correlating those in real time, we can alert on the most critical actions before they become a problem and reduce the impact. We have to get faster at detecting the attacks that will and do happen."

Equifax has since employed award-winning CISO Jamil Farshchi to shake up its security. He was named CISO of the year last month by the publication CIO Dive.
