
Data Encryption: RSA Encryption and Decryption


For RSA encryption and decryption, the iOS API provides a corresponding pair of methods:

SecKeyEncrypt (encrypt)
SecKeyDecrypt (decrypt)

OpenSSL likewise provides a set of functions:

RSA_public_encrypt
RSA_private_encrypt
RSA_public_decrypt
RSA_private_decrypt

By comparison, the OpenSSL functions are more explicit: public-key encrypt, private-key encrypt, public-key decrypt, private-key decrypt. The native iOS API only exposes generic encrypt and decrypt methods, but the method documentation states clearly that encryption uses the public key and decryption uses the private key.

In practice, public-key encryption with private-key decryption is by far the most common pattern; private-key encryption with public-key decryption is used much less often, but it is sometimes needed. If you really do need private-key encryption with public-key decryption, OpenSSL is a little more convenient, though it can also be done on iOS.

Here is a rough outline of how RSA encryption and decryption work:

1. Key generation

Public key: (E, N)
Private key: (D, N)

2. Encryption and decryption

ciphertext = plaintext<sup>E</sup> mod N
plaintext = ciphertext<sup>D</sup> mod N

Let's make this concrete with an example. Suppose key generation has produced the following key pair:

Public key = (E, N) = (5, 323)
Private key = (D, N) = (29, 323)

B = A<sup>E</sup> mod N = pow(123, 5) % 323 = 225
A = B<sup>D</sup> mod N = pow(225, 29) % 323 = 123

If A (123) is the plaintext, the steps above are public-key encryption followed by private-key decryption.

If B (225) is the plaintext, the same operations amount to private-key encryption followed by public-key decryption.

Reordering them may make this clearer:

A = B<sup>D</sup> mod N = pow(225, 29) % 323 = 123 (private-key encryption)
B = A<sup>E</sup> mod N = pow(123, 5) % 323 = 225 (public-key decryption)
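The arithmetic above can be verified directly with Python's built-in three-argument pow. The toy key pair (5, 323) / (29, 323) is the one from the example and is of course far too small for real use:

```python
E, D, N = 5, 29, 323        # toy key pair from the example above

A = 123
B = pow(A, E, N)            # "encrypt": A^E mod N
print(B)                    # 225

recovered = pow(B, D, N)    # "decrypt": B^D mod N
print(recovered)            # 123
```

Swapping the order (raise to D first, then to E) recovers the same values, which is exactly why private-key encryption with public-key decryption works.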

Looking at it this way, encryption and decryption are really the same operation. So why are there two separate methods? My understanding is:

Encryption takes the input data and performs the computation directly (just as above).

Decryption performs the same computation (again, just as above), but afterwards processes the result according to the padding mode, stripping out the random padding bytes.

So in principle, private-key encryption with public-key decryption does work; you just need to do some of the data handling yourself. See the Demo for a concrete implementation.

2. Segmented Encryption

The RSA algorithm itself requires that the plaintext m, treated as a big integer, satisfy 0 < m < n; the plaintext must not exceed the modulus n, or the operation fails. (If m = 0, an RSA implementation will simply return an all-zero result.) So when encrypting longer data, it has to be split into segments, and each segment must not be longer than the modulus length (the key size).

In practice, the segment length also depends on the padding mode:

Padding mode | Max input length | Output length | Padding content
PKCS1        | keySize - 11     | keySize       | random bytes
NONE         | keySize - 1      | keySize       | 0x00

Some articles claim that with NONE padding the maximum input length is keySize, but that is risky: if the plaintext is as long as the key, it may be numerically larger than the modulus, and encryption will then fail. So with NONE padding it is safer to use keySize - 1 as the segment length.

Segmented encryption implies segmented decryption. In practice, RSA ciphertext is always exactly the key length, so when decrypting, the ciphertext can simply be split into segments of the key size.
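The segmentation rules above can be sketched as follows (the helper functions are illustrative only; the keySize - 11 and keySize - 1 limits come from the padding table, and ciphertext blocks are assumed to be exactly the key size):

```python
def split_for_encryption(plaintext: bytes, key_size: int, padding: str = "PKCS1"):
    """Split plaintext into segments small enough for one RSA operation."""
    if padding == "PKCS1":
        max_len = key_size - 11   # PKCS#1 v1.5 needs at least 11 bytes of padding
    else:                         # NONE: keep one byte of headroom so m < n
        max_len = key_size - 1
    return [plaintext[i:i + max_len] for i in range(0, len(plaintext), max_len)]

def split_for_decryption(ciphertext: bytes, key_size: int):
    """Ciphertext blocks are always exactly the key size."""
    return [ciphertext[i:i + key_size] for i in range(0, len(ciphertext), key_size)]

# A 1024-bit key has a 128-byte modulus, so each PKCS1 segment holds 117 bytes.
chunks = split_for_encryption(b"x" * 300, key_size=128)
print([len(c) for c in chunks])   # [117, 117, 66]
```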

3. Padding Modes

To improve security and resist various attacks, real-world RSA adds a random element to encryption. So that the same plaintext yields a different ciphertext each time, some random bytes are added before encrypting. This is not unique to RSA; symmetric ciphers such as DES do it too, and it is called padding. The encryption standards define several padding schemes, for example PKCS1.

With PKCS1, each encrypted segment must be at least 11 bytes shorter than the key (keySize - 11), and the padded block looks like this (PS is random padding, M is the plaintext):

00 02 | PS | 00 | M (public-key encryption)
00 01 | PS | 00 | M (private-key encryption)

Starting the padded block with 0x00 also guarantees that the data being encrypted is numerically smaller than the key's modulus.

The other common option is NONE (no padding scheme): if the plaintext is shorter than the key, it is left-padded with zero bytes:

00 00 ... | M
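The two layouts above can be sketched as byte strings. This is a toy illustration of the block format only, not a secure implementation; real code must use a vetted crypto library:

```python
import os

def pkcs1_v15_pad(msg: bytes, key_size: int, block_type: int = 2) -> bytes:
    """Build a PKCS#1 v1.5 block: 00 | BT | PS | 00 | M."""
    if len(msg) > key_size - 11:
        raise ValueError("message too long for PKCS#1 v1.5")
    ps_len = key_size - 3 - len(msg)
    if block_type == 2:   # public-key encryption: PS is random and nonzero
        ps = bytes(b % 255 + 1 for b in os.urandom(ps_len))
    else:                 # block type 1 (private-key operations): PS is all 0xFF
        ps = b"\xff" * ps_len
    return b"\x00" + bytes([block_type]) + ps + b"\x00" + msg

def none_pad(msg: bytes, key_size: int) -> bytes:
    """NONE 'padding': left-pad with zero bytes up to the key size."""
    return msg.rjust(key_size, b"\x00")

block = pkcs1_v15_pad(b"hello", key_size=32)
print(len(block), block[:2].hex())   # 32 0002
```

Note that PS must contain no zero bytes, since the single 0x00 separator is what tells the decrypter where the plaintext begins.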

Demo link


Dynamically Toggling Spring Security at Runtime

1. Why toggle Spring Security on and off at runtime?

Consider the following scenario. After we built out a complete microservice architecture, one of the company's old internal systems wanted in on its benefits: real-time monitoring, rate limiting, circuit breaking, high-availability mechanisms, and so on. The old system's developers also hoped to reduce their own workload, so they decided to bring the old system into our microservice architecture.

This created adaptation and compatibility problems. Making the old system fully conform to the established microservice architecture would be expensive: technology upgrades, a steeper learning curve for developers, testing effort, and the old system still has a steady stream of new feature work. The ideal solution changes the old system as little as possible, integrating seamlessly if we can, with the existing microservice architecture providing the support for the integration.

For example, the old system had its own authentication and authorization built on Spring Security, while in the microservice architecture authentication and authorization are handled centrally at the API gateway. That conflicts with the old system's setup. So I needed requests routed to the old system through the API gateway to bypass the old system's own authentication and authorization flow and still succeed, while requests that do not go through the API gateway must still pass through the old system's authentication and authorization as before. That is the goal.

2. How does Spring Security work in a web project?

This is a diagram I found online, included just to illustrate the general picture: the core functionality of Spring Web Security is implemented on a single filter chain. For the details, see this class:

org.springframework.security.config.annotation.web.builders.FilterComparator, which defines all of Spring Security's filters and their ordering.

Once this is clear, the idea follows naturally: since I want to switch Spring Security off so that it has no effect, all I have to do is keep requests from passing through these filters.



FilterComparator source:

private static final int STEP = 100;
private Map<String, Integer> filterToOrder = new HashMap<String, Integer>();
FilterComparator() {
int order = 100;
put(ChannelProcessingFilter.class, order);
order += STEP;
put(ConcurrentSessionFilter.class, order);
order += STEP;
put(WebAsyncManagerIntegrationFilter.class, order);
order += STEP;
put(SecurityContextPersistenceFilter.class, order);
order += STEP;
put(HeaderWriterFilter.class, order);
order += STEP;
put(CorsFilter.class, order);
order += STEP;
put(CsrfFilter.class, order);
order += STEP;
put(LogoutFilter.class, order);
order += STEP;
put(X509AuthenticationFilter.class, order);
order += STEP;
put(AbstractPreAuthenticatedProcessingFilter.class, order);
order += STEP;
filterToOrder.put("org.springframework.security.cas.web.CasAuthenticationFilter",
order);
order += STEP;
put(UsernamePasswordAuthenticationFilter.class, order);
order += STEP;
put(ConcurrentSessionFilter.class, order);
order += STEP;
filterToOrder.put(
"org.springframework.security.openid.OpenIDAuthenticationFilter", order);
order += STEP;
put(DefaultLoginPageGeneratingFilter.class, order);
order += STEP;
put(ConcurrentSessionFilter.class, order);
order += STEP;
put(DigestAuthenticationFilter.class, order);
order += STEP;
put(BasicAuthenticationFilter.class, order);
order += STEP;
put(RequestCacheAwareFilter.class, order);
order += STEP;
put(SecurityContextHolderAwareRequestFilter.class, order);
order += STEP;
put(JaasApiIntegrationFilter.class, order);
order += STEP;
put(RememberMeAuthenticationFilter.class, order);
order += STEP;
put(AnonymousAuthenticationFilter.class, order);
order += STEP;
put(SessionManagementFilter.class, order);
order += STEP;
put(ExceptionTranslationFilter.class, order);
order += STEP;
put(FilterSecurityInterceptor.class, order);
order += STEP;
put(SwitchUserFilter.class, order);
}

3. How does the Spring Security filter chain work?

3.1 What is the Spring Security filter chain?

Debugging shows that the FilterChain actually used inside Spring Security's filters is of this type: org.springframework.security.web.FilterChainProxy.VirtualFilterChain, which implements the FilterChain interface.

3.2 How the Spring Security filter chain is initialized

Searching the code shows that the filter chain is initialized in this method: org.springframework.security.config.annotation.web.builders.WebSecurity#performBuild. Source:

@Override
protected Filter performBuild() throws Exception {
Assert.state(
!securityFilterChainBuilders.isEmpty(),
"At least one SecurityBuilder<? extends SecurityFilterChain> needs to be specified. Typically this done by adding a @Configuration that extends WebSecurityConfigurerAdapter. More advanced users can invoke "
+ WebSecurity.class.getSimpleName()
+ ".addSecurityFilterChainBuilder directly");
int chainSize = ignoredRequests.size() + securityFilterChainBuilders.size();
List<SecurityFilterChain> securityFilterChains = new ArrayList<SecurityFilterChain>(
chainSize);
for (RequestMatcher ignoredRequest : ignoredRequests) {
securityFilterChains.add(new DefaultSecurityFilterChain(ignoredRequest));
}
for (SecurityBuilder<? extends SecurityFilterChain> securityFilterChainBuilder : securityFilterChainBuilders) {
securityFilterChains.add(securityFilterChainBuilder.build());
}
FilterChainProxy filterChainProxy = new FilterChainProxy(securityFilterChains);
if (httpFirewall != null) {
filterChainProxy.setFirewall(httpFirewall);
}
filterChainProxy.afterPropertiesSet();
Filter result = filterChainProxy;
if (debugEnabled) {
logger.warn("\n\n"
+ "********************************************************************\n"
+ "********** Security debugging is enabled. *************\n"
+ "********** This may include sensitive information. *************\n"
+ "********** Do not use in a production system! *************\n"
+ "********************************************************************\n\n");
result = new DebugFilter(filterChainProxy);
}
postBuildAction.run();
return result;
}

org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration#springSecurityFilterChain source: FilterChainProxy is itself a filter, and it is this filter that gets registered on the servlet container's filter chain. Internally, it wraps Spring Security's own filter chain.

@Bean(name = AbstractSecurityWebApplicationInitializer.DEFAULT_FILTER_NAME)
public Filter springSecurityFilterChain() throws Exception {
boolean hasConfigurers = webSecurityConfigurers != null
&& !webSecurityConfigurers.isEmpty();
if (!hasConfigurers) {
WebSecurityConfigurerAdapter adapter = objectObjectPostProcessor
.postProcess(new WebSecurityConfigurerAdapter() {
});
webSecurity.apply(adapter);
}
return webSecurity.build();
}

3.3 How the Spring Security filter chain processes a request

org.springframework.security.web.FilterChainProxy#doFilter source:

public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
boolean clearContext = request.getAttribute(FILTER_APPLIED) == null;
if (clearContext) {
try {
request.setAttribute(FILTER_APPLIED, Boolean.TRUE);
doFilterInternal(request, response, chain);
}
finally {
SecurityContextHolder.clearContext();
request.removeAttribute(FILTER_APPLIED);
}
}
else {
doFilterInternal(request, response, chain);
}
}

org.springframework.security.web.FilterChainProxy#doFilterInternal source: this method is the entry point of the Spring Security filter chain. For every request, a new VirtualFilterChain instance is created and its doFilter method is invoked, sending the request into Spring Security's filter-chain processing.

private void doFilterInternal(ServletRequest request, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
FirewalledRequest fwRequest = firewall
.getFirewalledRequest((HttpServletRequest) request);
HttpServletResponse fwResponse = firewall
.getFirewalledResponse((HttpServletResponse) response);
List<Filter> filters = getFilters(fwRequest);
if (filters == null || filters.size() == 0) {
if (logger.isDebugEnabled()) {
logger.debug(UrlUtils.buildRequestUrl(fwRequest)
+ (filters == null ? " has no matching filters"
: " has an empty filter list"));
}
fwRequest.reset();
chain.doFilter(fwRequest, fwResponse);
return;
}
VirtualFilterChain vfc = new VirtualFilterChain(fwRequest, chain, filters);
vfc.doFilter(fwRequest, fwResponse);
}

org.springframework.security.web.FilterChainProxy.VirtualFilterChain#doFilter source: this is where Spring Security's filters are invoked one by one. The fields to pay attention to are originalChain (the original filter chain, i.e. the servlet container's), currentPosition (the current position in the Spring Security filter chain), and size (the number of filters in the Spring Security chain). When currentPosition == size, the Spring Security filter chain has finished, and the original originalChain is used to continue the call through the servlet container's filters.

13 data breach predictions for 2019


Data breaches are inevitable at any organization. But what form will those breaches take? How will the attackers gain access? What will they steal or damage? What motivates them to attempt the attacks? CSO has gathered predictions from industry experts about where, how and why cyber criminals will attempt to break into networks and steal data during the coming year.

1. Biometric hacking will rise

The growing popularity of biometric authentication will make it a target for hackers. We will likely see breaches that expose vulnerabilities in touch ID sensors, facial recognition and passcodes, according to the Experian Data Breach Industry Forecast . “Expect hackers to take advantage not only of the flaws found in biometric authentication hardware and devices, but also of the collection and storage of data. It is only a matter of time until a large-scale attack involves biometrics either by hacking into a biometric system to gain access or by spoofing biometric data. Healthcare, government, and financial industries are most at risk,” said the report’s authors.

2. A cyber attack on a car will kill someone

The ability to hack and take control of a connected vehicle has been proven. Such a hack can not only shut off the car's engine but also disable safety features like the antilock brakes or the airbags. "As cars become more connected and driverless cars evolve, hackers will have more opportunities to do real harm," says James Carder, CISO at LogRhythm Labs.

3. Attackers will hold the internet hostage

Someone, likely a hacktivist group or nation-state, will take distributed denial of service (DDoS) attacks to a whole new level in 2019 and attempt to take down a large part of the internet in an extortion attempt. A DDoS attack in 2016 against DNS hosting provider Dyn took down many popular websites, including Twitter, Reddit and Amazon.com. Security expert Bruce Schneier noted that attackers were probing other critical internet services for potential weaknesses.

“A DDoS attack of this magnitude against a major registrar like Verisign could take down an entire top-level domain’s (TLD) worth of websites,” WatchGuard’s Threat Lab team wrote in a blog post . “Even the protocol that drives the internet itself, Border Gateway Protocol (BGP), operates largely on the honor system. Only 10 percent of the internet addresses have valid resource public key infrastructure (RPKI) records to protect against route hijacking. Even worse, only 0.1 percent of the internet’s autonomous systems … have enabled route origin validation, meaning the other 99.9 percent are wide open for hostile takeover from route hijacking. The bottom line, the internet itself is ripe for the taking by someone with the resources to DDoS multiple critical points on the internet or abuse the underlying protocols themselves.”

Review: Continuous cybersecurity monitoring with CyCognito


Back in the early days of networking, a lot of effort went into hiring penetration testers who would come in and try to break security. They would then report on their findings, and, presumably, whatever flaws or vulnerabilities they discovered would get fixed before real attackers could come calling. Everybody did this, even the military, which dubbed its penetration testers "red teams." An experienced red team could find all kinds of previously unknown threats.

These human-centered penetration testing operations became less useful as networks began to grow. Today, even with something like a two-week engagement, most penetration testing teams can only get to a very small percentage, often less than one percent, of a total network. What good is a report about one or two application servers if there are hundreds or thousands of them deployed worldwide? It’s gotten even worse with the move to cloud, virtualization and software-defined networking. Assets might appear and disappear within the space of a few hours, and virtual servers are often abandoned and forgotten about. Most internal information technology teams don’t even know about all of their assets, so external penetration testers who visit maybe once a year certainly have no clue.

The industry has responded with things like vulnerability scanners to try and automate what penetration testing teams used to do. But they are limited by their programming and can only scan assets that are known to IT teams or that fall within a range of IP addresses. That’s not how hackers operate, of course. They are more than happy to compromise an unknown asset, a server outside of a defined IP range, a cloud asset or even a connected server sitting outside of an organization’s direct control downstream in the supply chain.

The CyCognito platform was designed to provide the kinds of advantages that old school penetration testing used to, but on a continuous basis and for modern, global enterprise networks comprised of mixed physical and virtual assets. It basically studies networks the same way that hackers do, from the outside with no help or internal bias inserted into the process.

How CyCognito works

Unlike most other reviews CSO has done, for the CyCognito platform there was no setup required. Nothing needs to be installed on the host network and there don’t need to be any assets on the inside either. The people who designed the CyCognito platform believe that even a simple act like defining IP addresses inserts bias into the testing. And because hackers aren’t given any parameters to work from, an attack surface monitoring tool shouldn’t either.

CyCognito maintains a network of over 60,000 bots scattered around the internet. The bots are constantly looking for assets connected to the internet and cataloging their findings, sort of like how Google looks for new webpages. Currently, there are about 3.5 billion assets that have been discovered by CyCognito, and that number is always growing. It’s pretty much the entire internet.

Once CyCognito is contracted to perform continuous attack surface monitoring of a company’s assets, the platform gets to work, collecting what it already knows and adding to that information. Pricing for the service is tiered and based on the number of assets in the organization’s attack surface, with yearly subscription models for the continuous monitoring.

Moto G, G4 Plus, G5, G5s, X4, Z3, Z2 Play and E5 Play Get December Security Patc ...


Motorola has released the monthly software updates for many of its smartphones. As you would expect, these updates cover three key areas:

(1) performance improvements

(2) resolving identified bugs or defects, and

(3) the security updates for the operating system, in this case, Android.

The phones to have received the updates in the last week include the Moto X4 , Moto Z3 , Moto G5 , Moto G5s and the Moto G4 Plus .

Apart from this, Motorola has also updated the following devices in the first half of December: Moto Z2 Play , Moto G and Moto e5 Play .

The maintenance release notes for all these phones on the Support pages of the Motorola.com website indicate that these devices have all received the December security patch, which Motorola releases monthly. This should clear up any minor issues in the functioning of the devices.

General Instructions and Information

As always, Motorola has issued standard directions to all users of its smartphones on how to handle the updates. These are over-the-air (OTA) updates: with the cooperation of the carriers, the update reaches your phone and a notification pops up on the display, saying that a new software update is available and asking whether you would like it installed. Just make sure your phone is connected to Wi-Fi and the battery is at 50% or more. Tap 'yes' and the download and installation will happen automatically. Once it is done, restart your device.

If you know an update is available but for some reason the notification did not show up, you can trigger it manually via Settings >> System >> System Updates >> Download and Install. You will have to restart as above to finish the process and enjoy your Moto device.

6 Network Security Challenges in the Year Ahead


The network security threat landscape in 2019 is expected to look much like it did in 2018. Here’s a look at six network security challenges for 2019 for businesses and individual users to keep in mind.

In many ways, the network security threat landscape in 2019 will look much like it did in 2018. From viruses to DDoS attacks, even when threats aren’t multiplying in number year over year, they’re managing to become more sophisticated and damaging. Here’s a look at six network security challenges for 2019 for businesses and individual users to keep in mind.

1. A Greater Amount of Sensitive Traffic Than Ever

In a 2018 survey, PwC reported that mobile channels were the only segment that saw growth that year among banking customers. In other words, demand for mobile-friendly banking tools is higher than ever. That means a lot of very sensitive data flowing over public and private networks.

In 2018, security experts from Kaspersky discovered what appeared to be a years-long router-hacking campaign by as-yet-unknown cyber-assailants. Researchers found digital fingerprints all over the world indicating that routers in public places had been subtly hacked to allow kernel-level access for any device connected to them.

Kernel-level access is the deepest access possible, indicating that the data being sought here was highly personal ― including, potentially, banking transactions and communication records.

2. Worms and Viruses

Viruses and worms are some of the best-known network security challenges. In 2015, Symantec estimated that as many as one million new malware threats were released into the wild every day, or a total of 217 million in a calendar year.

In 2017, AV-Test released research indicating that the number of new malware threats had declined for the first time ever, down to 127 million over the year.

Viruses infect specific files, such as documents, and can lie dormant until the user performs an action that triggers them, meaning there's not always any indication that something is amiss. Worms, by contrast, self-replicate and spread on their own once inside a target system.

For individual internet users, network architects and IT specialists, anti-virus and anti-malware programs are still necessary for keeping this class of threats at bay. For IT departments especially, high-profile computer bugs are a reminder that a vast majority of attacks target unpatched software and out-of-date hardware. The number of new threats might be gradually declining, but the severity of these threats hasn’t abated.

3. Compelling Students to Enter the STEM Fields

Let's switch focus for a moment and look at the next generation of people who will detect, fix and communicate about modern digital threats. All of the STEM fields are vital to national competitiveness, but among the top college majors ranked by number of job prospects, computer science takes first place.

According to the National Bureau of Economic Research, skills obtained in the fields of math, science and technology are increasingly transferable to, and relevant in, a wide variety of industries and potential career paths. Part of the reason is the ubiquity of technology and the rate of data exchange across the world, which powers commerce, finance, and most other human endeavors.

Unfortunately, the NBER has also indicated that the U.S. requires many more STEM students than it currently has, in order to compete in a digital and globalized world.

The number and types of cyber threats are a huge part of the reason why, with world powers and unknown parties engaging in cyber-espionage and attempted hacking at regular intervals, against both private and public infrastructure. Making a stronger push to get kids interested in these fields will also help address unemployment and opportunity gaps in struggling communities.

4. DDoS Attacks

For companies whose business model revolves around selling digital services, or selling anything else online for that matter, DDoS attacks can be crippling, not to mention ruinously expensive due to lost revenue.

DDoS attacks have made a lot of news recently, thanks to incidents like the Mirai botnet's attack on Dyn, but the motivation behind them seems to be shifting. Perpetrators today are less concerned with crippling a target's infrastructure and more interested, potentially, in using DDoS attacks as a distraction while they carry out more sophisticated penetration attempts without interference.

Either way, using the Internet of Things to overwhelm an organization's digital infrastructure is a type of network security threat that became more common in 2017 than in 2016 ― up 24 percent ― with no obvious signs of relenting. Early detection is the best weapon, as are web application firewalls. Both solutions require either an attentive in-house IT team or effective collaboration with your service provider.

5. Cryptojacking

Cryptocurrencies are either worthless or about to take off in a big way. But despite the uncertainty over their future, the limited applications and the slow adoption rate, "cryptojacking" is becoming a favorite pastime of hackers.

Cryptojacking occurs when a malicious app or script on a user’s digital device mines cryptocurrency in the background without the user’s knowledge or permission. “Mining” cryptocurrency requires a fair amount of hardware power and other resources, meaning users who’ve been cryptojacked will find that their programs and devices don’t work as expected.

Worse, the sheer variety of techniques used to introduce cryptojacking scripts into counterfeit and even legitimate web and mobile applications is positively dizzying. And since they come in all shapes and forms, cryptojacking attacks could well have other underhanded intentions beyond mining cryptocurrencies, including accessing forbidden parts of the code or sensitive user information.

6. Bring Your Own Device

Let’s close with a few words of advice about BYOD ― bring your own device ― policies in the workplace. There are clear benefits to allowing employees to use their favorite devices at work, including higher productivity and morale. But doing so also introduces a panoply of potential security threats.

IT departments already struggle sometimes with keeping computers and devices patched and updated, and the public struggles even more. Thanks to the fragmented nature of the Android operating system, for instance, “most” Android phones and tablets in operation today are not running the latest security fixes, according to security vendor Skycure.

Your employees and your business have a lot to gain from implementing BYOD. But doing so requires a comprehensive set of rules for employees to abide by, including turning on auto-updates for OS patches, completing training on how to respond to phishing attempts and other cybersecurity threats, and delivering regular reminders about good password hygiene.

No network security threat is insurmountable, but most of them do require vigilance ― and in most cases, a great IT team or a security-minded vendor.

Source:https://www.readitquik.com/articles/security-2/6-network-security-challenges-in-the-year-ahead/

Engadget giveaway: Win a security package courtesy of Bitdefender!


Deliveries left out on your doorstep, nocturnal pet activities and network-connected devices open wide to the world for possible hacking or botnet conscription are just some of the reasons Bitdefender has provided this week's giveaway package. The company's Box 2 is at the center, essentially a dual-band router with a specialty in cybersecurity protection . The device can monitor your network traffic for dubious activity, provide a VPN for privacy and protect your devices from malware. Bitdefender's subscription service also provides Parental Control to safeguard children against cyberbullying and online predators.

There are a host of ways this service can be a useful line of defense in an increasingly connected age, and once set up, you can worry less about them. All you need to do is head down to the Rafflecopter widget below for up to five chances at winning this package, which includes a Bitdefender Box 2 (with 1-year subscription), Blink 5-camera monitoring system and Ring Video Doorbell 2. Good luck!

Ixia, a Keysight Business, Achieves FIPS 140-2 Validation for Network Packet Bro ...


SANTA ROSA, Calif. (BUSINESS WIRE) Keysight Technologies, Inc. (NYSE: KEYS), a leading technology company that helps enterprises, service providers, and governments accelerate innovation to connect and secure the world, today announced that Ixia, a Keysight Business, has achieved Federal Information Processing Standards (FIPS) 140-2 validation for its Vision portfolio of network packet brokers (NPBs). This ensures that all cryptographic keys and algorithms conform to strict National Institute of Standards and Technology (NIST) and Canadian Centre for Cyber Security (CCCS) guidelines.


The Cryptographic Module Validation Program (CMVP) is a joint effort between NIST in the United States and CCCS, a branch of the Communications Security Establishment (CSE). The CMVP validates cryptographic modules to Federal Information Processing Standards (FIPS) 140-2, Security Requirements for Cryptographic Modules, and other FIPS cryptography-based standards. Federal agencies in the United States and Canada may acquire active FIPS 140-2 cryptographic modules listed by the CMVP for the protection of sensitive information.

“Ixia is committed to helping federal agencies and other organizations that require FIPS 140-2 validated cryptography protect their networks and data,” said Recep Ozdag, vice president, product management for Keysight’s Ixia Solutions Group. “Our investments in these certifications assure government agencies, military, and other security-conscious organizations that our visibility solutions meet the highest standards of security integrity.”

All of Ixia’s Vision network packet brokers are now FIPS 140-2 validated, in addition to the earlier Common Criteria and DoD Unified Capabilities Approved Products List (UC APL) certifications for Vision ONE and Vision 7300. The following FIPS 140-2 certificates were issued by the CMVP:

FIPS 140-2 Cert. # 3311: Ixia Cryptographic Module for Network Visibility
FIPS 140-2 Cert. # 3313: Ixia Cryptographic Module for OpenSSL

The certificates are valid on all Ixia Vision NPBs, including Vision ONE, Vision 7300, Vision Edge 10S, Vision Edge 100, Vision Edge 40 and TradeVision, Ixia’s market data monitoring platform, as well as the Vision Edge OS. The certificates will also be valid on future Vision NPBs.

Ixia’s NPBs allow businesses to see inside their networks and data centers. They deliver intelligent, sophisticated and programmable network flow optimization, providing visibility and security coverage for business assets. This helps IT teams quickly resolve performance bottlenecks, troubleshoot problems, improve data center automation, optimize expensive network analysis and security tools, and enhance business execution.

About Keysight Technologies

Keysight Technologies, Inc. (NYSE: KEYS) is a leading technology company that helps enterprises, service providers, and governments accelerate innovation to connect and secure the world. Keysight’s solutions optimize networks and bring electronic products to market faster and at a lower cost with offerings from design simulation, to prototype validation, to manufacturing test, to optimization in networks and cloud environments. Customers span the worldwide communications ecosystem, aerospace and defense, automotive, energy, semiconductor and general electronics end markets. Keysight generated revenues of $3.9B in fiscal year 2018. More information is available at www.keysight.com .

Additional information about Keysight Technologies is available in the newsroom at https://www.keysight.com/go/news

ClearDATA Appoints New C-Suite Leadership to Scale Healthcare Cloud and Security ...

New Chief Marketing Officer and Chief Revenue Officer Join ClearDATA
to Expand Enterprise Reach

AUSTIN, Texas (BUSINESS WIRE) ClearDATA , a leading healthcare cloud, security and compliance expert, has named Michael Donohue as Chief Marketing Officer and Dean Fredenburgh as Chief Revenue Officer to scale its market leadership position to the next level and better address the large and enterprise market segments.

Following a recent funding round of $26 million, ClearDATA continues to be among the elite, high-growth tech organizations, clearing 99 percent year-over-year growth last year. ClearDATA continues to build a team of experts, thought leaders and innovators who are eager to improve healthcare by delivering the agility and power of cloud services while maintaining strict privacy, security and compliance. The addition of the new roles illustrates ClearDATA’s momentum in healthcare innovation and in driving high-impact business outcomes for its customers. ClearDATA exclusively serves healthcare organizations, including payers, providers, life sciences, global systems integrators and health technology solutions companies.



“Dean and Michael will play a pivotal role as we increase our presence in the enterprise market, where we are working to modernize and protect health IT environments,” said Darin Brannan, Chief Executive Officer at ClearDATA. “Their combined experience in healthcare, cloud innovation and emerging technologies will have an immediate positive impact on both our customers and our future customers. We are thrilled to welcome them to the ClearDATA family.”

Michael Donohue is a senior executive with over 25 years of experience

driving marketing strategy, revenue growth, customer acquisition and

engagement, and brand equity for some of the world’s most recognized and

prominent brands including MedAsset/Vizient, Allscripts, Alere, J&J,

P&G, Dun & Bradstreet and SAP. Michael has spent most of his executive

leadership career in healthcare, innovating and marketing advanced

technologies and services to both high growth VC-backed startups and

well-established market leaders in the provider, payer, life sciences,

consumer and healthcare information technology (HCIT) markets. He was

recently the CEO of Axial Exchange, a VC-backed high growth startup

providing a secure, private, HIPAA compliant mobile platform for both

payers and providers. Michael was previously the Chief Marketing Officer

for MedAssets, a $750M healthcare technology and services organization,

where he led the overall go-to-market and customer growth strategies,

positioning, segmentation, cross sell/upsell, demand gen and strategic

partnerships resulting in a successful $2.3B exit. Prior to MedAssets,

he was VP of Ambulatory Sales and Channel Strategy at Allscripts and VP

Solutions Management. Michael was instrumental in successfully launching

and growing Allscripts' first Software-as-a-Service (SaaS) electronic

health record (EHR) to a $100M business. Michael was also Chief

Marketing Officer at Alere and held marketing leadership roles at D&B

and SAP. He began his career as an advertising executive in New York,

working for major advertising agencies including McCann Erickson, one

of the world’s leading full-service ad agencies.

“Healthcare has begun to adopt advanced technologies, but still is

behind other industries,” said Michael. “Only through new thinking,

continuous innovation and adopting the cloud can healthcare

organizations scale to offer the value-based care patients seek and

deserve. I plan to help ClearDATA expand in the enterprise market to

ultimately provide payers, providers, pharma and life sciences the

ability to truly innovate healthcare in a secure and compliant

environment. It’s a promising time for our industry and no one is better

suited than ClearDATA to help these organizations meet their business

objectives while improving patient outcomes.”

The new Chief Revenue Officer, Dean Fredenburgh, will focus on expanding

services throughout ClearDATA’s existing blue-chip customer base and

capturing broader adoption of secure and compliant cloud solutions and

services across the major healthcare segments. Dean is a proven

healthcare executive sales leader with technical solution sales and

management experience selling to the mid and enterprise healthcare

market for more than 20 years. Dean has tremendous passion and a long

successful history selling solutions to the highly-regulated industries

of healthcare, life sciences, payer and finance and is particularly

adept at public cloud, analytics, AI and machine learning technologies

and related Professional Services.

Prior to joining ClearDATA, Dean led the Healthcare and Life Sciences

sales group at Amazon Web Services (AWS) where he was responsible for

building the global enterprise sales team. In the process, he often

worked closely with ClearDATA’s senior executive team where he

recognized the substantive value in the combined ClearDATA/AWS solution.

Dean’s sales and executive experience includes a successful track record

of recently leading a 450-person division at Teradata; building high

growth sales teams, new business units, developing leaders, optimizing

territory and organization structures, devising go-to-market tactics and

strategies and building significant market share. Dean also has diverse

experience across the spectrum of technology company lifecycles, with

success in high-growth VC-backed startups through mid and large

enterprise organizations. He is known for both his strategic leadership and tactical execution in driving transformational IT and sales outcomes by building highly effective teams that deliver differentiated performance, successfully scaling to market leadership positions.

“ClearDATA presents an opportunity for me to apply the knowledge and

experience gained over 20 years to accelerate ClearDATA’s growth and

ability to carry out our mission to make healthcare better every day,”

said Dean. “Healthcare is in the midst of significant change and

transformation. Enabling a secure and compliant environment for

organizations to more quickly adopt the public cloud is a critical part

of this industry’s transformation, driving faster innovation and time to

market.”

“I’m focused on building a world-class team that enables our customers

to better serve their patients, customers and stakeholders. We are

scaling the team to meaningfully drive the healthcare providers, payers,

life sciences and product organizations to better modernize and protect

their IT environments. We will build upon ClearDATA’s considerable

talent and market leadership to focus on customer outcomes and reduce

friction associated with public cloud adoption,” continued Dean.

To learn mor

How do I log out of all sessions created after changing the user's password ...


I store my session ids in Redis. Each session id is globally unique; every login generates a new one, so even the same user will have different session ids across logins. As a result, I have no way to destroy all of a user's sessions, because I have no way to locate them. How can I design this to satisfy my need?

Sessions are a security issue. I suggest using a library/framework to handle them. If you hand-roll your own solution and you do it wrong, you open your users up to session hijacking, prediction, and/or fixation. Those are bad.

If you're using a session library and it's impossible to do what you want without seriously monkeying around, there's probably a reason.

If you really want to roll your own rather than using a session library, start here .
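If you do go custom, the standard pattern is to keep a reverse index from user to session ids alongside the sessions themselves, so every session for a user can be found and revoked at once. Below is a minimal sketch; a plain dict stands in for Redis (with real Redis the same keys would be a SET manipulated via SADD/SMEMBERS/DEL), and all names (SessionStore, the key prefixes) are illustrative, not from any particular library.

```python
# Sketch of the common fix: a per-user index of session ids enables bulk
# revocation. A dict stands in for Redis; with real Redis, use a SET
# (SADD / SMEMBERS / DEL) under the same keys. Names are illustrative.
import secrets

class SessionStore:
    def __init__(self):
        self.kv = {}  # stand-in for Redis

    def create_session(self, user_id):
        sid = secrets.token_hex(16)          # globally unique session id
        self.kv[f"session:{sid}"] = user_id  # SET session:<sid> <user_id>
        # The extra index that makes bulk revocation possible:
        self.kv.setdefault(f"user_sessions:{user_id}", set()).add(sid)  # SADD
        return sid

    def destroy_all_sessions(self, user_id):
        """Call this after a password change."""
        for sid in self.kv.pop(f"user_sessions:{user_id}", set()):  # SMEMBERS + DEL
            self.kv.pop(f"session:{sid}", None)                     # DEL session:<sid>

    def is_valid(self, sid):
        return f"session:{sid}" in self.kv
```

With the index in place, a password change reduces to calling destroy_all_sessions(user_id). An alternative that avoids the index entirely is to store a per-user "sessions invalid before" timestamp and reject any session created earlier than it.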

360 Browser Launches Its Own Root Certificate Program, Calls for Faster Certificate Security Upgrades


Chinanews.com, December 18 At the 2018 Cyberspace Trust Summit held December 17-18, Liang Zhihui, General Manager of 360's PC Browser Division, announced that 360 Browser will create its own root certificate program to comprehensively improve users' browsing security. Following Google's announcement of its own CA root certificate, this makes 360 the first browser vendor in China to create its own root certificate program.


Liang Zhihui, General Manager of 360's PC Browser Division, delivering a speech

Liang said that this year 360 Browser formally incorporated certificate security into its protection system. 360 Browser already marks unencrypted HTTP as "not secure." Starting at the end of this year, it will use a red padlock to mark HTTP sites as "not secure"; in 2019 it will mark all URLs beginning with http as "not secure," and if a user logs in to an HTTP page with a password form, the browser will also show a pop-up warning. In addition, 360 Browser supports China's SM (state cryptography) algorithms and mutual certificate verification based on them, aiming to support the promotion and smooth transition of China's domestic cryptographic algorithms.


360 Browser uses a red padlock and a pop-up warning to mark the current HTTP site as not secure

On CA oversight, 360 Browser's root certificate program trusts by default the root certificates already trusted by the operating system, while also maintaining its own root trust store as a supplement to the system store. 360 has published a certification policy for end-entity certificates used by web servers for SSL/TLS authentication; 360's maintainers will uphold this policy and evaluate new requests from CAs, and for CAs that do not comply, 360 reserves the right to remove any certificate, including root certificates trusted by the operating system.

Diversifying attack techniques, outdated countermeasures and encryption algorithms, incomplete SSL deployment across sites, and unregulated CAs all seriously undermine the security of individual and business users. Although 360 Browser has now announced its root certificate program, Liang believes this requires greater attention and cooperation from the whole industry; at the summit he called on website developers and the industry to support and invest in jointly advancing the technical overhaul of CA certification.

In addition, the CABO Forum (the Certification Authority-Browser-Operating System Forum) was formally launched at the summit. A non-profit discussion group, it will promote the pre-installation and use of CA root certificates in operating systems and coordinate browser vendors on unified details of Transport Layer Security (TLS) usage. Its members include third-party CAs, browser vendors, operating system developers, and organizations concerned with root certificate pre-installation; 360 Browser has announced it is joining. CABO is modeled on the international CA/Browser Forum and aims to advance the secure application of electronic authentication technology in China while seeking an international voice.


Liang Zhihui, General Manager of 360's PC Browser Division (third from right), at the CABO Forum launch ceremony

Network hijacking is rampant; SSL certificates urgently need a more rigorous management regime

As attack techniques multiply, network hijacking has grown worse and ever more sophisticated. Ten years ago, malware simply hijacked the browser homepage for profit; now attackers hide quietly in the network and use subtler techniques that are hard to detect: man-in-the-middle attacks via HTTP or DNS hijacking to plant malware or pop-up ads on websites; browser and Flash zero-day exploits that load privilege-escalation code to take over a computer system; even web scripts that mine cryptocurrency on the machines of visitors to malicious pages.

Meanwhile, most website developers in China still pay too little attention to this and lack corresponding protections. Many sites do not support SSL certificates, and quite a few support only HTTP access, transmitting sensitive information over plaintext protocols. Against an attacker, plaintext data in transit has no protection at all and is easily hijacked, leading to account theft.

Weak encryption algorithms and outdated browser engines also leave ordinary users exposed. Even with every other safeguard in place, vulnerabilities can be introduced by the underlying cipher suites; and browsers whose engines are not promptly updated greatly raise the odds of users encountering high-severity vulnerabilities.


Comparison of the browser engines used by major Chinese browsers

But is an SSL certificate alone enough? Not necessarily. In recent years, repeated incidents worldwide in which CAs such as Symantec mis-issued large numbers of SSL certificates without authorization have thrown the authority and security of the traditional CA establishment into a crisis of trust.

Today's HTTPS identity verification rests on the public key infrastructure (PKI), which assumes CAs are trustworthy and secure. Yet CA incidents have been frequent: documents leaked by Snowden in 2013 indicated that the US NSA used forged certificates issued by some CAs to intercept and decrypt large volumes of encrypted HTTPS traffic; in the 2017 Symantec certificate scandal, Google Chrome found that Symantec had mis-issued 30,000 HTTPS certificates, ultimately leading the five major international browser vendors to simultaneously announce distrust plans. With CAs now issuing and revoking certificates at scale, over-issuance, mis-issuance, and inadvertent trust occur regularly, and certificate trustworthiness and authenticity cannot be verified promptly and effectively. CAs have adopted some better management practices, but they are hard to rely on; certificate management urgently needs a more rigorous regime.

Against this backdrop, and to further improve user security, 360 has formally brought certificate security into its secure browser's protections. Foreign browser vendors moved earlier: last year Google formally announced its own CA root certificate, shedding its dependence on third-party intermediate certificate authorities. In China, 360 Browser is the first browser vendor to launch a root certificate program. Liang said the program will improve the efficiency of incident handling, shorten the window of risk, effectively verify the authenticity of site certificates issued by specific CAs, and help users quickly recognize trustworthy certificates. Its implementation will also ensure that the https shown in 360 Browser's address bar represents a genuinely secure and trustworthy page, further safeguarding users online.

360 Browser's root certificate certification process comprises five stages: CA application, information verification, request approval, pre-installation testing, and formal trust. To complete pre-installation, a CA must comply with 360 Browser's root certificate certification policy and supply all required materials, which 360 will review.

Powered by the Security Brain, 360 Browser will be safer, smarter, and more trustworthy

Since shipping its first product in 2007, 360 Browser has been around for 11 years, fighting malicious websites and the criminal underground throughout, an inheritance of 360's security DNA. 360 Group is China's largest internet security company, assembling the country's leading top-tier security teams and accumulating over ten thousand original technologies and core technology patents. Entering the "big security" era and facing new threats and challenges, 360 released the world's largest intelligent security defense system, Security Brain 1.0, in May of this year, fusing big data, cloud computing, artificial intelligence, IoT, mobile communications, blockchain, and other new technologies into an overall defense strategy for the security threats of the fully connected era. Liang said that powered by the Security Brain, the browser of the future will be safer, smarter, and more trustworthy.

On security assurance, 360 Browser keeps its engine updates in step with international releases, while the engines used by other major Chinese browsers remain stuck on versions from a year ago, meaning privilege-escalation vulnerabilities weaponized a year ago can be exploited with ease. 360's browser patches known high-severity vulnerabilities monthly, ensuring publicly disclosed vulnerabilities are fixed within 30 days; together with its distinctive 15-layer defense system, it counters trojan threats with proactive-defense drivers, browser sandboxing, cloud-based URL security, and other techniques.

In network information security technology, 360 Browser also leads the domestic field. As early as 2015, 360 Secure Browser was the first browser in China to support the SM (state cryptography) algorithms; starting in 2018, 360 announced that its entire product line would support domestic cryptographic algorithms and security protocols, shoring up a weak link in the existing cryptographic ecosystem. In the future, users will be able to access online banking, payment, and other applications that support domestic algorithms with higher encryption strength directly from 360 Browser, without installing dedicated client software. This is a major breakthrough for the promotion of China's own cryptographic algorithms, with far-reaching significance for improving the country's network security environment, accelerating the adoption of domestic algorithms in finance, breaking foreign technological control, mitigating financial transaction risk, and safeguarding the national financial system.


Untangle and Malwarebytes Partner to Offer a Simplified Approach to Layered Secu ...


Leaders in Network Security and Endpoint Protection and Remediation Join Forces to Provide Unprecedented Security, Visibility and Control to SMB IT

SAN JOSE, Calif., December 18, 2018 Untangle Inc. , a leader in comprehensive network security for small-to-medium business, and Malwarebytes, the leading advanced endpoint protection and remediation solution, today announced a new agreement to integrate Malwarebytes’ Endpoint Protection and Untangle’s cloud security platform, Command Center, to provide administrators with a single pane of glass to manage security orchestration across the network and connected devices, ensuring consistent, comprehensive security protection end-to-end.

“It can be overwhelming for companies to evaluate, deploy and manage disparate security solutions today,” said Michael Osterman, principal analyst with Osterman Research. “Untangle’s seamless integration with Malwarebytes is particularly compelling as it provides enhanced visibility and streamlined Command Center operations to reduce the headache of managing security operations for SMBs. This gives a particularly powerful end-to-end solution for the products when paired.”

“Channel partners are essential in getting proper security solutions into the hands of SMBs, so it’s crucial that we understand their customers and barriers to adoption,” said Scott Devens, chief executive officer at Untangle. “As we expected, cost and a lack of manpower are key pain points when it comes to security for both SMBs and channel partners. However, we were surprised to learn that more and more customers are becoming savvy to phishing attacks, with 43 percent of channel customers reporting attacks before a breach occurred.”

Untangle Command Center coupled with Malwarebytes Endpoint Protection offers administrators:

Greater visibility into the profile of hosts on the network including operating systems, installed software and security status.

Status of the last Malwarebytes scan, including time, duration, threats discovered, quarantined endpoints and any remediation.

Ability to initiate a Malwarebytes scan on any host, plus easy navigation between Untangle Command Center and Malwarebytes Management portal.

A single pane of glass for understanding the security status of the network and connected hosts, including identified threats and remediation.

“SMBs have limited resources and need an integrated security solution that is centralized and takes the guess work out of network security,” said Raj Mallempati, Senior Vice President of Marketing, Malwarebytes. “By integrating Malwarebytes’ Endpoint Protection with Untangle’s Command Center, we are able to give network administrators at small and medium-sized businesses control and visibility into their environment. This integration is a security win as it gives customers an easy-to-use, centralized platform to ensure the safety of their networks and connected devices.”

Untangle Command Center provides cloud-based, centralized management for Untangle NG Firewall deployments. With the integration of Malwarebytes Endpoint Protection, Command Center gives administrators full visibility, protection and control over the network and connected devices.

“Our partnership agreement with Malwarebytes begins an evolution of Command Center towards a full network security orchestration platform,” said Scott Devens, chief executive officer at Untangle. “This integrated solution makes an enterprise-grade, layered approach to security possible for small and medium-sized organizations by providing a simple, seamless approach to threat detection and remediation.”

“As a distributor of both Malwarebytes and Untangle, we believe that this partnership will provide immediate value to our MSPs by helping them understand the security posture of their networks and connected devices at a glance, filling a need for simplified and streamlined security orchestration for their SMB clients,” said Jay Bradley, founder and managing director of Prodata.

Untangle Command Center with Malwarebytes is available to customers today at untangle.com . Command Center centralized management is included for Untangle NG Firewall Complete subscribers at no extra cost.

About Untangle

Untangle is an innovator in cybersecurity designed specifically for the below-enterprise market, safeguarding businesses, home offices, nonprofits, schools and governmental organizations. Untangle’s integrated suite of software and appliances provides enterprise-grade capabilities and consumer-oriented simplicity to organizations with limited IT resources. Untangle’s award-winning network security solutions are trusted by over 40,000 customers around the world. Untangle is headquartered in San Jose, California. For more information, www.untangle.com .

About Malwarebytes

Malwarebytes proactively protects people and businesses against dangerous threats such as malware, ransomware and exploits that escape detection by traditional antivirus solutions. Malwarebytes completely replaces antivirus with artificial intelligence-powered technology that stops cyberattacks before they can compromise home computers and business endpoints. More than 60,000 businesses and millions of people worldwide trust and recommend Malwarebytes solutions. Our team of threat researchers and security experts process emerging and established threats every day, from all over the globe. Founded in 2008, the company is headquartered in California, with offices in Europe and Asia. For more information, please visit us at http://www.malwarebytes.com/ .

Malwarebytes founder and CEO Marcin Kleczynski started the company to create the best disinfection and protection solutions to combat the world’s most harmful Internet threats. Marcin was recently named “CEO of the Year” in the Global Excellence awards and has been named to the Forbes 30 Under 30 Rising Stars of Enterprise Technology list and the Silicon Valley Business Journal’s 40 Under 40 award, adding those to an Ernst & Young Entrepreneur of the Year Award.

Risk Quantification Decoded


For security teams, the idea of risk is nothing new; in fact, most security teams work with risk every day. However, the concept of distilling that risk down into numbers, risk quantification, is a hotly debated issue among information and security professionals. In 2018, in their inaugural Integrated Risk Management Magic Quadrant, Gartner listed risk quantification as a critical capability for integrated risk management solutions. Yet, the way security teams approach risk quantification varies widely from organization to organization. Here we’ll explore why risk quantification is still so ambiguous for many security teams and why it is critical that the industry embrace it as the next step for future success.

A brief history of risk and risk quantification

The modern concept of risk is directly correlated with uncertainty, and uncertainty is correlated with the availability of information. If an individual makes a decision with 100% certainty (or all possible information), there is no risk. Notice there is a difference between possible and available information. While individuals work to assemble all available information, it is almost impossible to assemble all possible information prior to a decision deadline. If we had to know all the possible information to make a decision, we would not be able to get our morning coffee let alone lead a team.

Risk has been an integral part of business since the modern concept evolved. From contracts in the 16th century to the emergence of lending, business leaders have been taking risks seemingly forever. Until the 17th and 18th centuries, though, the decision to accept or reject that risk was predicated on subjective measures such as personal relationships and word of mouth.

The industry that catalyzed the development of objective risk quantification was, to no surprise, insurance. Critical to their business model, insurance companies innovated new ways to calculate the risks associated with individuals and material objects. In the 20th century, governments began to call for increased use of risk quantification; driven by rising tensions following nuclearization and the Cold War, the US government needed the means to make calculated decisions moving forward.

Business risk in the modern age

Business is inherently risky as it is predicated on the fact that businesses that survive are doing something different from their competitors. If someone is doing something never done before, they are taking a risk. Looking at the Ansoff Matrix for new product development, we see that teams of any function must embrace some form of risk.

Risk for information and security professionals

We’ve seen before that risk reduction, the primary objective of security teams, is often at odds with business growth. In fact, Bromium reports that 74% of CISOs see security as the primary hindrance to business growth and innovation . Both growth and innovation require taking risk.

It is not the job of the security team to stand in the way of the rest of the organization and be at odds with the CEO; in fact, businesses where that happens are the ones that stagnate. It is also not the job of the CEO to turn a blind eye to the security risks inherent to business growth.

Both the CEO and security leaders need to be effective at relaying the necessary information to each other: the CEO must effectively convey their ideas and strategy, and the security leader must be able to effectively convey the risks associated with that strategy for the CEO to make a well-informed decision about whether to move forward.

The issue is that without an objective means to convey the risks associated with the CEO’s strategy, the CISO cannot hold up their end of the relationship.

Barriers to adoption of risk quantification

If risk quantification is so critical to a CISO, why is it so widely debated? The fact is, information security has never before been this critical to a company’s bottom line. Information is the new currency, and customers’ trust in an organization’s ability to secure their information has a direct impact on the bottom line.

We are in uncharted waters in terms of how to put objective numbers around activities that were previously focused on ensuring that the rest of the organization continued to function.



The MIT CISR breaks the risks managed by information professionals into four categories: agility, accuracy, access, and availability.

Up until the digital revolution, the primary focus of security teams was mostly availability, some access, and pieces of accuracy and agility.

With digitization, that has completely shifted. In fact, the role of the CISO is now more focused on agility: securing the organization as it rapidly adopts new technologies that are not necessarily secure. This has changed the dynamic and created the need for risk quantification. Unfortunately for those working to define it, the easiest function to define is availability: in the case of business continuity, we can look at what happens in the event of a disaster, how long processes stop, and what revenue is lost as a result of that breakdown.

However, what happens in the event of a data breach? No servers go down, business is not interrupted, yet stocks tank and bottom lines are slashed. This is the power of reputational risk and why risk quantification in the digital age is so difficult. It has fallen to the information security function to define the risks a company faces when customers lose faith in its ability to protect their information.

Risk quantification for information security

While the need for concrete risk quantification has emerged, the landscape of frameworks to quantify risk is still fragmented. We’ll take a look at the most popular frameworks to date for risk quantification:

NIST SP 800-30: Originally published in 2002 and updated in 2012, NIST Special Publication 800-30, or the NIST Risk Management Framework, is built alongside the gold-standard NIST Cybersecurity Framework as a means to view an organization’s security threats through a risk-based lens. The limitation of the NIST RMF is its orientation: the revised version published in 2012 is designed for risk assessment. While that lends itself to risk quantification, it does not directly determine the probability of risks in a fully objective manner.

FAIR Model : Factor Analysis of Information Risk (FAIR) Model is touted as “the only international standard quantitative model for cybersecurity and operational risk”. To date, the FAIR Model has been widely debated in the security community for its approach and ability to quantify risk. Recently, the FAIR Model has moved from obscurity to prominence for those reasons.

World Economic Forum Cyber Risk Framework and Maturity Model : Originally published in 2015, the WEF framework bears similarities to the NIST RMF in its subjectivity. Where the FAIR model is more data-driven, the WEF framework relies on human decisions to determine the probability of risk.
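To make the contrast concrete: FAIR-style models decompose annualized loss exposure into loss event frequency times per-event loss magnitude and estimate the result by simulation. The sketch below is a toy Monte Carlo version of that decomposition; the distributions and every parameter are invented for illustration and are not taken from the FAIR standard itself.

```python
# Toy FAIR-style Monte Carlo: annualized loss exposure (ALE) simulated as
# loss event frequency (Poisson) times per-event loss magnitude (lognormal).
# All parameters below are made up for illustration.
import math
import random

def poisson(rng, lam):
    # Knuth's algorithm: count uniform draws until the running product
    # falls below e^-lam; the count is a Poisson(lam) sample.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def simulate_ale(freq_mean=2.0, loss_median=50_000, loss_sigma=1.0,
                 years=10_000, seed=42):
    rng = random.Random(seed)
    mu = math.log(loss_median)  # lognormal median -> mu parameter
    totals = []
    for _ in range(years):
        events = poisson(rng, freq_mean)  # breaches this simulated year
        totals.append(sum(rng.lognormvariate(mu, loss_sigma)
                          for _ in range(events)))
    totals.sort()
    return {
        "mean": sum(totals) / years,      # expected annual loss
        "p95": totals[int(0.95 * years)], # roughly a 1-in-20-year loss
    }
```

The value of a simulation like this is not the point estimate but the distribution: a CISO can report both an expected annual loss and a tail figure, which is exactly the kind of objective number the frameworks above aim to produce.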

Conclusion

Digitization and concern around consumer information have shifted information security leaders from the periphery to an integral business function. Information is the new currency, and security leaders need to effectively partner with the CEO in order to mitigate an organization’s risk while empowering, not hindering, business growth and innovation. Risk quantification gives security leaders the means to map risks associated with a strategy to business outcomes as well as dollars and cents. While we are still in the early days of this emerging field, 2019 will be a pivotal year for the field. As more CEOs become proactive in overseeing their security program, security leaders will need a tool to convey that information effectively and integrate all risk data. With a standard set of tools to communicate risk, security and business leaders can adopt a common language to secure their organizations.


The Hot and the Odd: A Critical View on Innovative Cybersecurity Practices for 2 ...



Let’s face it. The good old days of hacking are over.

You may remember that period when cybercrime was the work of underfunded individuals operating on their own. Traditional cyber security measures were usually more than enough to block attacks and protect networks and users. What’s more, criminals’ core motivation, money, made their behaviors easy to predict.

Thinking about it retrospectively, it wasn’t that bad, right?

Today’s landscape is less straightforward. Perpetrators are organized and have access to funding and manpower to make enterprises and economies tremble. Financial incentives behind data breaches meet political ambitions. And new categories of devices mean new entry points for attacks.

With thousands of threat events recorded every second and no sign of ceasefire, 2019 might be the best year to take a fresh look at cybersecurity. So what’s hot and an effective use of one’s security budget? And what’s odd and may not be a good fit for you?

This post looks at some innovative cybersecurity practices, considering both the pros and cons of each of them.

Threat Hunting

Why wait for threats to dismantle your IT infrastructure when you can chase them instead and avoid damages? That’s the principle behind threat hunting, the practice of isolating attacks that common security protections are not capable of detecting by themselves.

What’s strong about this practice, besides its rhetoric, is that it can be a significant cost-avoider. Cybercrime has an average annual cost of $11.7M, and the number of recorded security breaches is rising by double digits year after year. So clearly, more work is necessary besides installing firewalls and antiviruses.

Still, it’s advisable to approach this technique with a healthy dose of skepticism, such that it doesn’t become a cover-up. For instance, why is it that so many hacks and scams slip through the cracks? What can be done to reinforce organizational security processes and reduce the need for hunting overall?

Also, how are threat hunting operations going to be run? You may hire internal specialists who can then spend time and tailor efforts to your particular IT network and assets. Or you may rely on external experts working with several clients at once and, therefore, with a broader perspective on emerging threats and the capacity to think outside the box.

Either way, for proper hunting to take place, you will need investigative instruments to carry out threat intelligence and detect system vulnerabilities. These tools should allow you to gather reliable data about the security configurations of your servers, domains and IP addresses, SSL certificates, and more. They should also enable you to check whether any of your websites contain malicious content ― in the form of dangerous file extensions, bugged contact forms, or something else.
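As a toy illustration of one such check (the extension blocklist below is an assumption made for the example, not an authoritative list), flagging uploads by file extension might look like this:

```python
# Minimal sketch: flag uploaded filenames whose extensions are commonly
# associated with executable or script payloads. The extension set is
# illustrative only, not a complete blocklist.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".ps1"}

def flag_risky_uploads(filenames):
    """Return the subset of filenames whose extension looks dangerous."""
    flagged = []
    for name in filenames:
        # Compare the lowercased name so "setup.EXE" is caught too.
        lowered = name.lower()
        if any(lowered.endswith(ext) for ext in RISKY_EXTENSIONS):
            flagged.append(name)
    return flagged

print(flag_risky_uploads(["report.pdf", "invoice.js", "photo.png", "setup.EXE"]))
# → ['invoice.js', 'setup.EXE']
```

Real scanners of course go far deeper than filename checks ― inspecting content, not just names ― but the principle of systematically sweeping assets for known-bad indicators is the same.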

AI-Powered Cybersecurity

Putting an end to repetitive and boring work. Automating time-consuming tasks. Processing information at speed inconceivable for human beings. These are some of artificial intelligence’s promises you have surely heard about, and they sound encouraging in a cybersecurity context where talent shortage and limited security budgets are recurring constraints.

Without much or any supervision, trained machines could automatically spot signs of attacks such as abnormal network activity. Or they could consistently review all files that have been uploaded or modified in search of malevolent code or scripts designed to, for example, steal confidential information or compromise databases.
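A minimal sketch of this idea, using a simple z-score test on request rates (the traffic numbers and threshold are invented for illustration; production systems use far richer models than a single statistic):

```python
import statistics

def is_abnormal(history, latest, threshold=3.0):
    """Flag `latest` if it deviates from the historical baseline by more
    than `threshold` standard deviations (a classic z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Requests per minute observed recently vs. a sudden spike.
baseline = [120, 115, 130, 125, 118, 122, 127, 119]
print(is_abnormal(baseline, 121))   # → False (normal traffic)
print(is_abnormal(baseline, 900))   # → True (worth investigating)
```

The machine-learning versions of this replace the hand-picked threshold with models trained on many signals at once, which is exactly where the mistraining risk discussed next comes in.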

While that sounds like a big boost in efficiency compared to carrying out these security activities manually, artificial intelligence is a double-edged sword. For instance, cybercriminals could launch bogus attacks at scale with the purpose of mistraining machines before radically changing their approach and going undetected.

Furthermore, the chances are that hackers will be more agile and faster to adopt the latest AI-based processes to execute their fraudulent pursuits than most organizations ― making artificial intelligence a threat as much as it is an asset to better cybersecurity.

Domain Name Monitoring

Criminals need an online presence to proceed with most scams, and that typically involves registering one or several domain names to host a website or send emails. The good news is that registrars, as required by ICANN, must collect specific information identifying registrants, including their contact details and physical location, before allocating web addresses.

That data, known as WHOIS records, is then made public and becomes useful in a variety of ways. For example, email users who received a message from an unknown sender can review details regarding the corresponding domain. Was registration done recently? If so, it might be a sign of fraud, as scammers don’t wait long to move forward with phishing and spoofing attempts.
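That recency heuristic is easy to sketch. The 30-day cutoff below is an arbitrary illustration, not a standard, and a real check would pull the registration date from a WHOIS lookup rather than take it as a parameter:

```python
from datetime import date, timedelta

def looks_freshly_registered(registration_date, today, max_age_days=30):
    """Heuristic: treat domains registered within `max_age_days` as
    higher-risk, since phishing domains are typically used soon after
    registration. The cutoff is illustrative."""
    return (today - registration_date) <= timedelta(days=max_age_days)

today = date(2019, 1, 15)
print(looks_freshly_registered(date(2019, 1, 10), today))  # → True
print(looks_freshly_registered(date(2015, 6, 1), today))   # → False
```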

Is information diverging between records and other touchpoints? WHOIS data is verifiable and immutable, whereas domain owners can claim anything on their websites or elsewhere and change it later on.

But there are several issues with domain information. A big one is the scattered nature of WHOIS data, since there is one separate record for each address ― making it impractical for organizations whose employees interact with hundreds of websites and recipients on a daily basis. Another challenge is that scammers may not provide accurate information about themselves during the registration process.

These problems are mitigated, however, once the information is consolidated into databases. In that case, it’s possible to run an analysis for thousands or more domains simultaneously. Patterns and connections between malicious domains also emerge when data is centralized, even if the contact details provided for individual records are fake.

Hacking and scamming threats have not become easier to handle over the years, and innovative cybersecurity practices continue to emerge with the hope to tackle them. But no innovation is a silver bullet, and with each new approach comes both advantages and downsides to consider on the way toward better cybersecurity.

AV-TEST publishes its list of the best security apps for Android


Antivirus testing lab AV-TEST recently evaluated the performance of 20 mobile security products for Android. Unsurprisingly, the best results once again came from industry-leading products developed by well-known vendors, including Bitdefender and Kaspersky. The study, conducted in November 2018, tested each product in three different areas: protection, usability, and features. Each product could earn a maximum of 6 points per test.

Trend Micro Mobile Security, Tencent WeSecure, Symantec Norton Mobile Security, Sophos Mobile Security, McAfee Mobile Security, Kaspersky Lab Internet Security for Android, G Data Internet Security, and Bitdefender Mobile Security all achieved the top score of 13 points.

Each of these Android security apps earned the maximum six points for protection and six for usability, plus one point for features. By contrast, Google’s Play Protect, which is integrated into the Google Play Store, received the lowest score of 4.5 points, after scoring zero for protection, 4.5 for usability, and zero for features. It was the only solution that failed to earn AV-TEST certification.

Apart from Google Play Protect, the worst performer was NSHC Droid-X 3, which received only 3 points for protection, 6 for usability, and 1 for features. Needless to say, these test results can help you choose an Android security app. And, as many Android users have recently discovered, staying away from malware on the mobile platform is actually fairly easy as long as you download apps only from trusted sources.



Security lessons from the House Oversight and Government Reform Committee


The U.S. House Committee on Oversight and Government Reform has more than a few things to say about responsible enterprise application security.



On Dec. 10, 2018, the House Oversight and Government Reform Committee released a staff report detailing the committee’s 14-month investigation into the 2017 Equifax data breach. The 96-page, 35,000-word report is well worth reading in its entirety if you’re interested in how relatively small security missteps can cascade into a major data breach. But for the tl;dr crowd, here are the key lessons I took away from the report.

If you have an aggressive growth strategy, you must have a software security initiative to match

A company’s growth and accumulation of data can result in a complex―and, in some cases, antiquated―IT environment, making software security especially challenging. Even if you recognize the security risks of your legacy systems, moving too slowly to implement a comprehensive software security initiative (SSI) will leave your sensitive data exposed.

When a vulnerability is disclosed, you’re in a race with attackers

On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Security researchers observed a high number of exploitation attempts almost immediately after disclosure. In fact, on the same day of disclosure, information about how to exploit the Apache Struts flaw was posted to several Chinese websites popular with hackers.



Thousands of organizations were affected, and even though many applied the patch to their systems immediately, the attacks kept coming. All it took to create the conditions for the breach was for one department at one firm to miss patching one custom-built internet-facing consumer dispute portal running a version of Struts containing the vulnerability.

Is open source inherently less secure?

Open source like Apache Struts is not less secure (or more secure) than commercial software, but there are characteristics of open source that make it attractive to attackers when vulnerabilities are disclosed. Unlike commercial software, open source usually does not include a support contract. That means that open source users are responsible for tracking updates for security or functionality. If you aren’t aware of vulnerabilities in the open source you use, you become an easy target for attackers.

Hackers know that many organizations do not properly track the open source they use, as we’ll see below.
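A minimal sketch of what such tracking involves: matching a component inventory against a table of known-vulnerable versions. The inventory and the two-entry vulnerability table below are illustrative; real software composition analysis tools work from complete, continuously updated vulnerability feeds:

```python
# Hypothetical known-vulnerable (component, version) pairs mapped to CVEs.
# CVE-2017-5638 is the Struts flaw discussed in the report; the table
# itself is a toy stand-in for a real vulnerability feed.
KNOWN_VULNERABLE = {
    ("struts2-core", "2.3.31"): "CVE-2017-5638",
    ("commons-collections", "3.2.1"): "CVE-2015-7501",
}

def find_vulnerable(inventory):
    """Return (component, version, cve) for every inventory entry that
    appears in the known-vulnerable table."""
    hits = []
    for component, version in inventory:
        cve = KNOWN_VULNERABLE.get((component, version))
        if cve:
            hits.append((component, version, cve))
    return hits

inventory = [("struts2-core", "2.3.31"), ("guava", "27.0"), ("log4j", "1.2.17")]
print(find_vulnerable(inventory))
# → [('struts2-core', '2.3.31', 'CVE-2017-5638')]
```

The hard part in practice is not this lookup but building the inventory in the first place, which is exactly the problem the next section describes.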

You can’t patch what you don’t know about

It can be difficult to maintain adequate software asset management procedures. As the OGRC report describes, even if you have an ongoing initiative to develop a comprehensive inventory of your IT systems, including all components for each system, your inventory might not be comprehensive at any given time.

No company is alone in finding it difficult to maintain an accurate list of all components used in their applications, especially when it comes to open source components. The 2018 Open Source Security and Risk Analysis (OSSRA) reported that Black Duck On-Demand audits of over 1,100 commercial codebases found open source components in 96% of the applications scanned, with an average 257 components per application.

Seventy-eight percent of the audited codebases contained at least one vulnerability, with an average 64 vulnerabilities per codebase. Eight percent of the audited codebases were found to contain Apache Struts, and of those, 33% still contained the Struts vulnerability nearly a year after that vulnerability’s disclosure, and months after the famous breach.



Another important data point found by the scans was that the average age of the vulnerabilities discovered is increasing. On average, vulnerabilities identified in the audits were disclosed nearly six years ago―versus the four years reported in the 2017 OSSRA report―suggesting that those responsible for remediation are taking longer to remediate, if they’re remediating at all, allowing a growing number of vulnerabilities to accumulate in codebases.

Your application security program needs to evolve to be effective

No one technique can find every vulnerability. Static analysis (SAST) is essential for detecting security bugs―SQL injection, cross-site scripting, buffer overflows―in proprietary code. Dynamic analysis (including DAST, IAST, and fuzz testing) is needed for detecting vulnerabilities stemming from application behavior and configuration issues in running applications.

But with the growth in open source use, organizations also need to ensure that software composition analysis (SCA) is in their application security toolbelts. With the addition of SCA, organizations can effectively detect vulnerabilities in open source components as they manage whatever license compliance their use of open source may require.

As the final line of the OGRC report states, “Private sector companies, especially those holding sensitive consumer data,… must prioritize investment in modernized tools and technologies.”



Learn more about SCA

New VMware Security Advisory VMSA-2018-0031


Today, VMware has released the following new security advisory:

“VMSA-2018-0031 vRealize Operations updates address a local privilege escalation vulnerability ”

This documents the remediation of an important-severity local privilege escalation vulnerability (CVE-2018-6978) in vRealize Operations (vROps). The issue exists due to improper permissions on support scripts. An admin** user of the vROps application with shell access may exploit this issue to elevate privileges to root on a vROps machine.

**The admin user (non-sudoer) should not be confused with root of the vROps machine.

We would like to thank Alessandro Zanni, pentester at OVH for reporting this issue to us.

Please sign up to the Security-Announce mailing list to receive new and updated VMware Security Advisories.

Customers should review the security advisories and direct any questions to VMware Support.

Species richness analysis with R and sf


This content is available in Spanish here.

Counting how many underlying features intersect a layer of interest is a very common geoprocessing task, and has always been possible with GIS software. I’ve been getting into the sf package to do spatial analyses in R, and wanted to document and share this approach for quantifying and plotting species richness.

The code below is fully reproducible (as long as the relevant packages are installed). I chose Costa Rica as an example, and rather than downloading and reading distribution data for actual species we will generate random points iteratively. Shapefiles can be read easily into sf using st_read. Here, I calculated gridded richness for point data first and then for convex hulls, because those are some of the most common approaches used nowadays.

These are the main steps in the process:

Setup
- subset a world map to get a single-country polygon
- generate a random number of random points within the country for n different ‘species’
- create smoothed convex hulls around each set of points

Geoprocessing
- generate a grid to cover our country polygon
- intersect and join the multipoint feature set and the convex hulls with the grid (separately)

Plotting
- plot the richness grids using perceptually-uniform and colorblind-friendly palettes using ggplot, scico and sf

The different spatial elements look like this:

The blank map



Randomly generated points for n ‘species’



Smoothed convex hulls around the sets of points



Grid for calculating richness



Gridded richness for points



Gridded richness for convex hulls


Notes:

The rerun function from purrr is awesome, I hadn’t seen it before but it is a very cool replacement for most basic for loops. I don’t know how I missed it.

When plotting, we use a bounding box and the st_touches function with some tidyverse magic (slicing and plucking) to get the adjacent countries, so we can add more context to our plot and focus in on our polygon of interest without having to set up the limits manually.

We counted the number of intersecting features using a grid, but this can also be done using political boundaries (states, municipalities, counties), vegetation types or any other layer of interest.

Any feedback is welcome.

R code:

# load packages
library(sf)
library(dplyr)
library(ggplot2)
library(scico)
library(rnaturalearth)
library(purrr)
library(smoothr)

# get a world map
worldMap <- ne_countries(scale = "medium", type = "countries", returnclass = 'sf')

# filter country
CRpoly <- worldMap %>% filter(sovereignt == "Costa Rica")

# generate random points, then name each list element
sp_occ <- rerun(12, st_sample(CRpoly, sample(3:20, 1)))
names(sp_occ) <- paste0("sp_", letters[1:length(sp_occ)])

# to sf, with column of element names
sflisss <- map(sp_occ, st_sf) %>% map2(., names(.), ~mutate(.x, id = .y))
sp_occ_sf <- sflisss %>% reduce(rbind)

# to multipoint
sp_occ_sf <- sp_occ_sf %>% group_by(id) %>% summarise()

# set up bounds
limsCR <- st_buffer(CRpoly, dist = 0.7) %>% st_bbox()

# context
adjacentPolys <- st_touches(CRpoly, worldMap)
neighbours <- worldMap %>% slice(pluck(adjacentPolys, 1))

# blank map
divpolPlot <- ggplot() +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank())

# plot points
spPointsPlot <- ggplot() +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly) +
  geom_sf(data = sp_occ_sf, aes(fill = id), pch = 21) +
  scale_fill_scico_d(palette = "davos", direction = -1, end = 0.9, guide = FALSE) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank())

# smoothed convex hulls
spEOOs <- st_convex_hull(sp_occ_sf) %>% smooth()

# plot hulls
hullsPlot <- ggplot() +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly) +
  geom_sf(data = spEOOs, aes(fill = id), alpha = 0.7) +
  scale_fill_scico_d(palette = "davos", direction = -1, end = 0.9, guide = FALSE) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank())

# grid
CRGrid <- CRpoly %>%
  st_make_grid(cellsize = 0.2) %>%
  st_intersection(CRpoly) %>%
  st_cast("MULTIPOLYGON") %>%
  st_sf() %>%
  mutate(cellid = row_number())

# calculate n per grid square for points
richness_grid <- CRGrid %>%
  st_join(sp_occ_sf) %>%
  group_by(cellid) %>%
  summarize(num_species = n())

# calculate n per grid square for hulls
richness_gridEOO <- CRGrid %>%
  st_join(spEOOs) %>%
  group_by(cellid) %>%
  summarize(num_species = n())

# blank grid
gridPlot <- ggplot() +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly) +
  geom_sf(data = CRGrid) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank())

# plot gridded richness
gridRichCR <- ggplot(richness_grid) +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly, fill = "grey", size = 0.1) +
  geom_sf(aes(fill = num_species), color = NA) +
  scale_fill_scico(palette = "davos", direction = -1, end = 0.9) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank()) +
  labs(fill = "richness")

# richness based on hulls
gridRichCR_eoo <- ggplot(richness_gridEOO) +
  geom_sf(data = neighbours, color = "white") +
  geom_sf(data = CRpoly, fill = "grey", size = 0.1) +
  geom_sf(aes(fill = num_species), color = NA) +
  scale_fill_scico(palette = "davos", direction = -1, end = 0.9) +
  coord_sf(xlim = c(limsCR["xmin"], limsCR["xmax"]),
           ylim = c(limsCR["ymin"], limsCR["ymax"])) +
  scale_x_continuous(breaks = c(-84)) +
  theme(plot.background = element_rect(fill = "#f1f2f3"),
        panel.background = element_rect(fill = "#2F4051"),
        panel.grid = element_blank(),
        line = element_blank(),
        rect = element_blank())

Links to help me monitor my Zeverlution PV converter locally


Since it’s my data, I’d rather be in control myself, so here are some links that will help me going around the Zevercloud solution I posted about yesterday.

Raspberry Pi based:
- Parse RS485 data: eversolar-monitor/Introduction.md at wiki solmoller/eversolar-monitor (including suitable RS485 hardware)
- solmoller/eversolar-monitor: Script to capture data and create statistics from Eversolar/zeversolar Solar Inverters. Includes easy install image files for Raspberry Pi. Working edition since 2012 :-)
- Upload to Domoticz: Issue #14 solmoller/eversolar-monitor
- Some merges still need to be done: Network Graph solmoller/eversolar-monitor
- In case of connection trouble: No inverters connected, Issue #13 solmoller/eversolar-monitor
- Zeversolar to RS485 to UART adapter, Issue #16 solmoller/eversolar-monitor
- The above started out as this:
  - [ WayBack ] Eversolar Inverter Monitoring with Linux | Steve’s Home Page, page 1
  - [ WayBack ] Eversolar Inverter Monitoring with Linux | Steve’s Home Page, page 2
  - [ WayBack ] Google Code Archive: Long-term storage for Google Code Project Hosting.

Parse Zeverlution web-server:
- smeyn/ZeverSolar: python reader from a Zever Solar web server

Arduino based:
- nrw505/inverter-monitor: Device to monitor Eversolar/Zeversolar inverters

Some other interesting links of software supporting Zeversolar devices:

pvlib/pvlib-python: A set of documented functions for simulating the performance of photovoltaic energy systems.

Note that PVoutput.org does have native ZeverCloud updating using the ZeverSolar API key:

- [ WayBack ] Zeversolar auto Updater Auto Uploader, PVOutput Community
- [ WayBack ] 2017-04-20 Zevercloud Auto Uploader, What’s New, PVOutput Community

But these might help me:

- [ WayBack ] GitHub solmoller/eversolar-monitor: Script to capture data and create statistics from Eversolar/zeversolar Solar Inverters. Includes easy install image files for Raspberry Pi. Working edition since 2012 :-)
- [ Archive.is ] eversolar-monitor/Introduction.md at wiki solmoller/eversolar-monitor, GitHub
- [ WayBack ] GitHub smeyn/ZeverSolar: Python reader from a Zever Solar web server
- [ WayBack ] Zeversolar pv inverter, Domoticz
- [ Archive.is ] Zeversolar auto Updater Auto Uploader / Zevercloud, PVOutput Community
- https://www.dropbox.com/s/noya0yaf9suxs59/Solarcloud-API-guide_En_v20141114.pdf?dl=0
- [ WayBack ] PV statistics for ROI | Steve’s Blog
- [ Archive.is ] FAQ Zeversolar

of which I already bumped into

[ WayBack ]

AI-powered cybersecurity ― or how to avoid becoming the next shocking data brea ...


Artificial intelligence has supercharged cybersecurity, with faster, smarter ways to identify and analyze threats in real time ― and take them down fast, letting you avoid disaster. Join this VB Live event to learn more about how AI-powered security can lock down your data, improve privacy, protect the enterprise and more.

Register here for free.

“Identity is back on the front page, as people are starting to understand that stolen identity is the number one security issue out there,” says Jim Ducharme, VP of Identity Products at RSA. “Compromised credentials is the weak link in the security armor, but there are lot of good technical advancements in the market.”

Artificial intelligence is the key, Ducharme says. It allows us to go beyond some of the less scalable ways of protection, with its ability to scan enormous data sets to detect complex attacks and changing attack patterns, and then adapt to them.

“For over a decade, AI and machine learning has demonstrated it can do a better job of fraud detection,” he says. “It’s proven to work in the world of security, particularly in advanced fraud. Now we need to take a lot of the same principles and apply them to securing other things.”

For instance, enterprise access ― is this person who they claim to be? It’s time to move past basic security strategies and the way we think about security (the “I know your mother’s maiden name, so it must be you” world) and think about ways AI can supplement the safeguards currently in place.

“It’s not that companies who have experienced breaches didn’t care about security or didn’t have controls in place to protect their data,” he says. “The reality is, the threat actors found ways around those static controls to get to that data. But that’s where AI comes in, to add a layer above that static control.”

He offers the example of credit and debit card transactions: Why is it that a 4-digit PIN is good enough to protect your bank account?

“Here in the enterprise, my password has to be at least eight characters, have a special character, an uppercase letter, a number, and I change it every 60 days,” he says. “While my debit card is protected by a 4-digit PIN, and I haven’t changed that password since I first set it when I was in high school.”

And that PIN can be guessed pretty easily ― there are only ten thousand combinations, and it’s probably either your birthday, your kid’s birthday, or a sequential set of numbers.

“But the beauty is, behind that PIN, behind that piece of plastic, is AI and machine learning fraud detection,” he says. “It’s asking, is this your normal pattern of behavior? Did you just buy a Ferrari with your debit card?”

AI-powered fraud detection goes beyond the simple static controls to look for things that don’t make sense ― you had the right PIN and you seem to have the card, but this doesn’t smell right.

Fraud departments are the best way to see the power of AI day in and day out, Ducharme says, with the technology on the back end detecting fraud in real time. The next level is the enterprise case.

If someone logs into the enterprise server on a device they’ve never used or in an unknown location, that odd pattern can be flagged, and an identity challenge issued.
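A toy sketch of that kind of contextual check; the profile fields and signal names below are invented for illustration, and a real system would score and weight these signals rather than simply list them:

```python
def login_risk_signals(profile, attempt):
    """Compare a login attempt against the user's known profile and
    return the signals that look unusual. Field names are illustrative."""
    signals = []
    if attempt["device"] not in profile["known_devices"]:
        signals.append("new device")
    if attempt["country"] not in profile["usual_countries"]:
        signals.append("unusual location")
    if attempt["network"] not in profile["usual_networks"]:
        signals.append("unfamiliar network")
    return signals

profile = {
    "known_devices": {"laptop-01", "phone-07"},
    "usual_countries": {"US"},
    "usual_networks": {"corp-vpn", "home-isp"},
}
attempt = {"device": "desktop-99", "country": "RU", "network": "tor-exit"}
print(login_risk_signals(profile, attempt))
# → ['new device', 'unusual location', 'unfamiliar network']
```

Enough accumulated signals would then trigger the identity challenge described above, instead of challenging every user on every login.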

Go back to any corporate data breach example where somebody extracted an entire database: AI and machine learning would note that this user does have access to the system, with the right credentials, but that they’ve just downloaded every customer’s data, and that just doesn’t seem to match their normal pattern of user behavior.

“The good news is, most companies have realized that things like usernames and passwords are easily compromised ― they recognize the weak link,” says Ducharme. “Too many times the mistake is, they think the way in which they have to add additional layers of security is just putting an additional burden on the end user to protect their information.”

It results in what he calls the Fort Knox paradox: to protect cloud data, companies make their employees log in via a VPN, so they can’t reach a cloud resource without going through the enterprise ― which defeats the purpose and keeps the infrastructure costs that moving to the cloud was supposed to eliminate. Or they require users to change their password every 30 days instead of every 60, or up the required complexity, and so on, making controls more labyrinthine without adding any significant security benefit. And it almost always ends with users finding workarounds that defeat the purpose entirely, like the written-down-password epidemic.

“It took me half an hour to create a password that worked with a bank’s password policy, because it was so complicated,” he says. “What did I have to do? I had to write it down on a post-it note. How secure is that, right? Who’s it really protecting? That’s the problem it creates.”

He cites the local cable provider with all the passwords for the systems he needed access to laminated onto his laptop; or the fire station with passwords for the state fire systems displayed on the wall, next to the system’s URL. Or the retail store with passwords to all of the store systems underneath the keyboard.

“The antithesis of that: I encourage customers to think about that information they think is so critical to their enterprise, how would they protect it with a 4-digit PIN?” he says. “Again, that leads into the discussion of machine learning and AI.”

It means shifting the burden off of the user, reducing friction on the front end, and putting security control on the back end, where it belongs.

There are a huge number of tools that cover everything from fraud to identity assurance, Ducharme says, but before you even consider tools, determining assurance levels is the first place to start.

“I used to use the example of our former president at RSA, Amit Yoran,” he says. “He always used to wear a black shirt and black pants. I said, if you think about it, our security team knows it’s Amit when he walks in. They do some recognition. There’s information about what he’s wearing that gives us the assurance it’s him. In an enterprise setting, I encourage folks to look at that as well.”

Step one, get out of your silo and look across the organization at sources of information that allow you to make a decision about how to tell if a person is who they say they are. Look at your data and applications and determine who is supposed to have access, and what would make it strange for them to be there. What would give you the assurance a user is who they say they are, this is what they should be doing, and if they’re doing it right?

It’s behaviorally based, he explains, and starts with something as simple as the devices they’re using, the locations that they’re coming from, and the networks they’re on. From there, go to behavioral patterns: Let’s take a look at Jim’s behavior and see if this is consistent with his previous patterns.

If Sally, tomorrow, logged into the system from St. Petersburg, Russia, would that raise an eyebrow? What else would raise an eyebrow? What if Sally showed up with a mustache? What if Amit showed up in a three-piece suit?

There are also three different dimensions to consider: identity assurance, access assurance, and activity assurance. Identity assurance is, do we know this person is who they claim to be: Is it Jim? Access assurance is, do we understand what he has access to: What can Jim do? Let’s say Jim is a developer. Should he have access to production systems? Jim’s a bank teller. Should he have access to the full vault?

Then there’s activity assurance. Is Jim doing what Jim should be doing? Is it normal for Jim to download every customer record?

It’s not just information that makes you raise your eyebrow, but information that would give you more certainty or assurance that that person is who they say they are.

“Those are all the things you want to feed into that contextual-based AI and machine learning algorithm,” he says. “You’ll start making these connections across your enterprise, and that’s going to be the fuel that feeds your AI and machine learning engine.”

This step is essential, even as just a thought experiment, he adds. These problems need to be thought about in new ways, and approached with a different mindset, or it’s too easy to fall back on patterns of defining the static policies that got you in trouble in the first place. A static control that says if a transaction is over $50,000, you throw up an identity challenge just means the fraudsters will rob you 20 cents at a time, 250,000 times.
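The difference between the static control and the behavioral one can be sketched in a few lines. The thresholds below are arbitrary illustrations; the point is that aggregating per account catches the drip pattern a per-transaction rule misses:

```python
from collections import defaultdict

def detect_drip_fraud(transactions, window_total=100.0, min_count=1000):
    """A static per-transaction threshold misses many tiny withdrawals;
    aggregating per account catches the '20 cents at a time' pattern.
    Thresholds are illustrative, not recommendations."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for account, amount in transactions:
        totals[account] += amount
        counts[account] += 1
    return [acct for acct in totals
            if totals[acct] >= window_total and counts[acct] >= min_count]

# 2,000 transactions of $0.20 each: every single one passes a $50,000
# static check, but the aggregate view flags the account.
txns = [("acct-1", 0.20)] * 2000 + [("acct-2", 45.00)]
print(detect_drip_fraud(txns))  # → ['acct-1']
```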

Initiating an AI-powered cybersecurity strategy really is as easy as that, he says.

“The biggest barrier to AI and machine learning is that it’s not the black magic that people think it is,” says Ducharme. “It’s complicated, but it’s approachable. Otherwise we’ll be living with these horrible passwords and messes like that for a while.”

To learn more about planning and launching a 21st-century cybersecurity strategy, what cybersecurity specialists need to know about the tools and infrastructure required to add AI and machine learning to their security mix and more, don’t miss this VB Live event!

Don’t miss out!

Register here for free now.

Attend this webinar and learn:
- How AI is defeating and preventing cyberattacks
- When AI analytics need to be deployed and for what reason
- How to build AI-powered tools that can assure consumers their data is secure
- Real-world AI applications and what they mean for cybersecurity

Speakers:
- Jim Ducharme, VP of Identity Products, RSA
- Dave Clark, Host, VentureBeat

More speakers to be announced soon!
