Channel: CodeSection, cybersecurity - CodeSec

Maybe we have the cybersecurity we deserve


Three hundred twenty-seven million Marriott user accounts compromised. 100 million at Quora. 148 million from Equifax. Those all pale in comparison to the 3 billion user accounts compromised from Yahoo in 2013.

Ask yourself this: do you find yourself becoming outraged or saying “ho-hum” every time you hear about the latest record data breach? Society seems to be settling on the latter answer.

I was recently sitting in a room with some of the world’s brightest minds at a Secure Technology Alliance consortium meeting in Washington, DC, trying to figure out how to better authenticate and secure our digital world. It was easily the most brains-and-experience-per-square-foot meeting I’ve ever been in, focused on better and more pervasive authentication.

Many of the presenters talked about how bad things are today, with continued phishing and unpatched software making Swiss cheese of most organizations’ security defenses. This is despite myriad competing great authentication solutions, which are undermined by seemingly indifferent users.

“Why don’t users care more about security?” was a common question asked during breaks. Many other presenters pointed out that many of the problems we were raising were the same problems we faced 30 years ago. It was a room full of people dedicated to figuring out the remaining hard problems and trying to get the right authentication solutions developed and standardized.


HCC Embedded Achieves ISO 27001 Certification


HCC takes proactive step to mitigate risk and manage information security

BUDAPEST, Hungary (BUSINESS WIRE) #AdvancedEncryptionModule HCC Embedded (HCC), experts in high-quality software components for deeply embedded systems, today announced it has been awarded ISO/IEC 27001:2013 certification, one of the most widely recognized and internationally accepted information security standards. This certification reflects both HCC’s long-term commitment to quality management principles and its expertise in managing risk and protecting data on behalf of the company and its customers.



With the rising frequency of data breaches, security lapses, and cyber attacks, the ISO family of standards for managing information security has become increasingly important. HCC is building up its safety processes to serve the growing demands of industries such as automotive that require ISO 26262 compliance and demand proper processes for software development. All these standards require that companies developing to them are built on sound and auditable processes that manage all aspects of risk within a system of continuous improvement.

ISO 27001 uses a risk-based approach that identifies requirements and specifications for a comprehensive Information Security Management System (ISMS). The standard defines how organizations should manage information securely, including applicable security controls. To achieve this certification, an independent audit firm validated HCC’s security compliance and completed a rigorous process, in which HCC demonstrated an ongoing systematic approach to managing and protecting company and customer data. The audit process covered areas such as risk management procedures, threat mitigation, loss prevention, access control, physical security, and security practices.

“We continue to take HCC products to ever higher levels of quality and as part of this we have formalized our safety and security processes,” said HCC CEO Dave Hughes. “By pursuing and achieving the stringent ISO 27001 certification, we have gone above and beyond the required controls to mitigate risk and keep sensitive data secure and protected. We are uniquely building our company on auditable processes associated with risk management and data security, reflecting our quality commitment to customers.”

HCC was founded on quality principles and attained ISO 9001:2015 quality management system certification in 2017. Achieving ISO 27001 certification further strengthens HCC’s commitment to quality, reassures customers that their data and products will be secure, and brings HCC into line with the new European General Data Protection Regulation for protecting data privacy.

For more information, visit: https://www.hcc-embedded.com/embedded-systems-misra/quality-overview

About HCC Embedded

HCC Embedded develops deeply embedded software components “out of context,” which ensures that they can be used as core elements of any system, including those engineered to meet stringent requirements for safety, quality, and portability. Built on a foundation of quality, HCC has a product portfolio of more than 250 embedded components, with deep competencies in reliable flash management, fail-safe file systems, IPv4/6 networking stacks with associated security protocols, as well as a comprehensive suite of USB host and function software. Since 2002, HCC has supplied these embedded software components to over 2,000 companies globally in a wide range of industries including industrial, medical, and automotive.

Contacts

Hughes Communications, Inc.

Angie Hatfield, Media Relations

+1-425-941-2895

angie@hughescom.net

HCC Embedded

Orsolya Eszterváry, Marketing

+36-70-904-7620

orsolya.esztervary@hcc-embedded.com

Most home routers lack simple Linux OS hardening security



More disconcerting news for router owners: a new assessment of 28 popular models for home users failed to find a single one whose firmware had fully enabled the underlying security hardening features offered by Linux.

CITL (Cyber Independent Testing Laboratories) says it made this unexpected discovery after analysing firmware images from Asus, D-Link, Linksys, Netgear, Synology, TP-Link and Trendnet running versions of the Linux kernel on two microprocessor platforms, MIPS and ARM.

The missing security protections included Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), and RELocation Read-Only (RELRO).

Granted, this will sound like a jumble of technical terms to most router owners, but in modern operating systems this layer of security should matter.

Linux pioneered features such as ASLR (Windows added it with Vista in 2007) and DEP, the latter taking advantage of the memory protection features of modern CPUs via the NX (no-execute) bit.

As its name suggests, ASLR protects against buffer overflow attacks by randomising where system executables are loaded into memory (so attackers don’t know where they are).

Meanwhile, its relative, DEP, stops code from executing out of memory regions that are supposed to hold only data.

The point of security hardening like this is to make it harder for attackers to exploit software flaws as and when they are found.
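As a quick illustration of what ASLR buys (a sketch for intuition only, not part of CITL's methodology), the following Python snippet launches several fresh processes and records where each one allocates a native buffer. With ASLR enabled, the reported addresses typically differ from run to run, denying an attacker the fixed addresses an exploit would otherwise rely on.

```python
import subprocess
import sys

# In each child process, allocate a native buffer and print its address.
SNIPPET = ("import ctypes; "
           "print(ctypes.addressof(ctypes.create_string_buffer(64)))")

def buffer_address():
    """Run a fresh interpreter and return the address of a newly
    allocated buffer inside that process."""
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

# Collect the distinct addresses seen across five independent processes.
addresses = {buffer_address() for _ in range(5)}

# With ASLR enabled, the runs typically report several distinct addresses;
# with it disabled, every run would report the same one.
print(addresses)
```

A firmware image built without position-independent executables loads at the same addresses every boot, which is exactly the predictability this randomization is meant to remove.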

How does this affect routers?

Router makers base their firmware on a version of the Linux kernel atop which they implement proprietary extensions.

In principle, there is nothing stopping them from implementing features such as ASLR, but according to CITL that doesn’t seem to have been happening.

For ASLR, all models assessed achieved a low score ranging from 0% use to almost 9% in one case, with most around half of that. With the exception of a Linksys model that scored 95%, RELRO implementation wasn’t much better.

For comparison, Ubuntu 16.04 LTS implemented ASLR on 23% of its executables and RELRO protection on 100%.

MIPS vulnerability

A clue as to why this is happening could be the particularly weak scores of the 10 routers running MIPS for protections such as DEP.

These weak scores stem in part from a weakness in Linux kernels between 2001 and 2016 relating to the implementation of floating-point emulation. The researchers also noticed a potential security-hardening bypass introduced by a 2016 kernel patch.

As CITL puts it: “We also observe a significant lag in adoption of the latest Linux kernels, and related compiler toolchains, in many MIPS devices including end user devices.”

The Linux kernel version shouldn’t in itself result in poor security hardening (most of these features have been available in Linux for many years), but it does suggest the firmware used by many of these routers was developed at a time when security was a lower priority.

Indeed, the same issue might explain why so many routers still run on MIPS, an aging platform left over from the early 2000s and Broadcom’s Wi-Fi reference design, which came bundled with its chips. For MIPS, the researchers advise:

We believe consumers should avoid purchasing products built on this architecture for the time being.

CITL argues that although ARM-based routers are a more secure choice, even here the security hardening varies widely within the same vendor’s products.

Should we be worried?

Yes and no. Yes, because a router lacking these basic protections is inherently less secure; no, because even if this were fixed, there would still be many other security problems within routers for attackers to aim at.

For instance, the router industry has a mixed reputation for fixing security vulnerabilities when they are discovered, in some cases apparently abandoning some models (and their users) to their fate.

In fairness, when it comes to patching, the router industry has improved a lot. However, CITL’s analysis suggests more fundamental work still lies ahead.

Examining the Tweeting Patterns of Prominent Crossfit Gyms

A. Introduction

The growth of Crossfit has been one of the biggest developments in the fitness industry over the past decade. Promoted as both a physical exercise philosophy and a competitive fitness sport, Crossfit is a high-intensity fitness program incorporating elements from several sports and exercise protocols such as high-intensity interval training, Olympic weightlifting, plyometrics, powerlifting, gymnastics, strongman, and so forth. Now with over 10,000 Crossfit affiliated gyms (boxes) throughout the United States, the market has certainly become more saturated and gyms must initiate more unique marketing strategies to attract new members. In this post, I will investigate how some prominent Crossfit boxes are utilizing Twitter to engage with consumers. While Twitter is a great platform for news and entertainment, it is usually not the place for customer acquisition given the lack of targeted messaging. Furthermore, unlike platforms like Instagram, Twitter is simply not an image/video centric tool where followers can view accomplishments from their favorite fitness heroes, witness people achieving their goals, and so forth. Given these shortcomings, I wanted to understand how some prominent Crossfit boxes are actually using their Twitter accounts.

B. Extract Data From Twitter

We begin by extracting the desired data from Twitter using the rtweet package in R. There are six prominent Crossfit gyms whose entire Twitter timeline we will use. To get this data, I looped through a vector containing each of their Twitter handles and used the get_timeline function to pull the desired data. Notice that there is a user-defined function called add_engineered_dates that is used to add a number of extra date columns. That function is available on my GitHub page here.

library(rtweet)
library(lubridate)
library(devtools)
library(data.table)
library(magrittr)   # provides the %>% pipe used below
library(ggplot2)
library(gridExtra)  # grid.arrange() for multi-plot panes
library(hms)
library(scales)
# set working directory
setwd("~/Desktop/rtweet_crossfit")
final_dat.tmp <- list()
cf_gyms <- c("reebokcrossfit5", "crossfitmayhem", "crossfitsanitas", "sfcrossfit", "cfbelltown", "WindyCityCF")
for(each_box in cf_gyms){
message("Getting data for: ", each_box)
each_cf <- get_timeline(each_box, n = 3200) %>% data.table()
each_cf$crossfit_name <- each_box
suppressWarnings( add_engineered_dates(each_cf, date_col = "created_at") )
final_dat.tmp[[each_box]] <- each_cf
message("")
}
final_dat <- rbindlist(final_dat.tmp)
final_dat$contains_hashtags <- ifelse(!is.na(final_dat$hashtags), 1, 0)
final_dat$hashtags_count <- sapply(final_dat$hashtags, function(x) length(x[!is.na(x)]))

C. Exploratory Data Analysis

Let us start by investigating this data set to get a better understanding of trends and patterns across these various Crossfit boxes. The important thing to note is that not all these Twitter accounts are currently active. We can see that crossfitmayhem, sfcrossfit, and WindyCityCF are the only ones who remain active.

final_dat[, .(min_max = range(as.Date(created_at))), by=crossfit_name][, label := rep(c("first_tweet","last_tweet"))][]

C1. Total Number of Tweets

Sfcrossfit, which is the oldest of these gyms, has the highest number of tweets. However, when looking at the total number of tweets per active month, they were less active than two other gyms.

## total number of tweets
p1 = final_dat[, .N, by=crossfit_name] %>%
ggplot(., aes(x=reorder(crossfit_name, N), y=N)) + geom_bar(stat='identity', fill="steelblue") +
coord_flip() + labs(x="", y="") + ylim(0,3000) + ggtitle("Total Number of Tweets") +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black"))
## number of tweets per active month
p2 = final_dat[, .(.N, start=lubridate::ymd_hms(min(created_at)), months_active=lubridate::interval(lubridate::ymd_hms(min(created_at)), Sys.Date()) %/% months(1)), by=crossfit_name][,
.(tweets_per_month = N/months_active), by=crossfit_name] %>%
ggplot(., aes(x=reorder(crossfit_name, tweets_per_month), y=tweets_per_month)) +
geom_bar(stat='identity', fill="steelblue") + coord_flip() + labs(x="", y="") + ylim(0,32) +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black")) +
ggtitle("Total Number of Tweets per Active Month")
## add both plots to a single pane
grid.arrange(p1, p2, nrow=1)

C2. Total Number of Tweets Over Time

The time series for the total number of tweets by month shows that each gym had one or two peaks from 2012 through 2016 where they were aggressively sharing content with their followers. However, over the past two years, each gym has reduced its Twitter usage significantly.

## total number of tweets by month
final_dat[, .N, by = .(crossfit_name, created_at_YearMonth)][order(crossfit_name, created_at_YearMonth)][,
created_at_YearMonth := lubridate::ymd(paste0(created_at_YearMonth, "-01"))] %>%
ggplot(., aes(created_at_YearMonth, N, colour=crossfit_name)) + geom_line(group=1, lwd=0.6) +
facet_wrap(~crossfit_name) + labs(x="", y="") + theme(legend.position="none") +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black"),
strip.text.x = element_text(size = 10)) +
ggtitle("Total Number of Tweets")
## total number of tweets by year
ggplot(data = final_dat,
aes(lubridate::month(created_at, label=TRUE, abbr=TRUE),
group=factor(lubridate::year(created_at)), color=factor(lubridate::year(created_at))))+
geom_line(stat="count") + geom_point(stat="count") +
facet_wrap(~crossfit_name) + labs(x="", colour="Year") + xlab("") + ylab("") +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black"),
strip.text.x = element_text(size = 10)) +
ggtitle("Total Number of Tweets by Year")

C3. Tweeting Volume by Year, Month, and Day

For each Crossfit gym, I plotted the volume of tweets by year, month, and day. Oddly enough, there really are not any noticeable patterns in these charts.

## years with the highest number of tweets
ggplot(final_dat, aes(created_at_Year)) + geom_bar(fill="steelblue") +
facet_wrap(~crossfit_name) + labs(x="", y="") +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black"),
strip.text.x = element_text(size = 10)) + ylim(0,800) +
ggtitle("Total Number of Tweets by Year")
## months with the highest number of tweets
final_dat[, created_at_YearMonth2 := lubridate::ymd(paste0(created_at_YearMonth, "-01"))][] %>%
ggplot(., aes(lubridate::month(created_at_YearMonth2, label=TRUE, abbr=TRUE))) + geom_bar(fill="steelblue") +
facet_wrap(~crossfit_name) + labs(x="", y="") +
theme(plot.title = element_text(hjust = 0.5),
axis.ticks.x = element_line(colour = "black"),
axis.ticks.y = element_line(colour = "black"),
strip.text.x = element_text(size = 10)) +
ggtitle("Total Number of Tweets by Month")

SoK: Security Evaluation of Home-Based IoT Deployments


Venue: S&P ’19

Authors: Omar Alrawi, Chaz Lever, Manos Antonakakis, Fabian Monrose

Affiliation: Georgia Institute of Technology

Paper: https://www.computer.org/csdl/proceedings/sp/2019/6660/00/666000a208.pdf

Introduction

Smart homes have long fallen short on security. The root cause is that IoT systems, compared with traditional embedded systems, add smart endpoints and networking, and so expose a much larger attack surface. This paper synthesizes a large body of prior work to help researchers and practitioners better understand attack techniques against smart homes, the available mitigations, and how stakeholders should address these problems. Finally, the authors apply their methodology to evaluate 45 smart home devices and publish the resulting data at https://yourthings.info.

Methodology: Abstract Model

The paper models a deployment as a graph with vertices V: A (apps), C (cloud), D (devices), and edges E: the communication channels between them. Security properties and attack surfaces are then organized along these components.

Device

Vulnerable services; weak authentication; default (factory) configurations.

Mobile application (Android, iOS)

Permissions: over-privileged apps. Programming: cryptographic misuse. Data protection: API keys, passwords, hard-coded keys.

Communication (local, Internet)

Encryption; MITM.

Cloud

Vulnerable services; weak authentication; encryption.

Mitigations: patching; frameworks (refactoring). Stakeholders: vendors and end-users.

These stakeholders could be subdivided further (chip vendors, IoT platforms, resellers, third-party developers, and so on) to define who is responsible for solving whose problems.

Classification criteria: Merit (novelty and effectiveness); Scope (focus on security, both offensive and defensive); Impact; Disruption (whether the work opens up a new area).

Threat model

Only Internet-protocol network attackers are considered; attacks on low-energy devices are excluded, since the authors argue that most homes do not face attackers with the resources those attacks require. Moreover, if the hub device can be hacked, all of the low-energy devices behind it are assumed to be exploitable as well. (This deliberately limits the scope of the discussion.)

Related Work

Device

Attack vectors: Exposed pins on a device let an attacker gain privileges with little effort, insecure configurations compound vulnerabilities, and missing or weak authentication is the most common flaw; together these explain why device-level security problems are disclosed so often. Examples:

August Smart Lock: hard-coded keys and debug interfaces. Cloud-based cameras: "strong" passwords that were just the Base64 encoding of the MAC address. Sonos devices: a backdoor service listening on a high port with no authentication. Vendors integrating third-party libraries find it hard to guarantee the security of the whole. Philips Hue: a side-channel attack recovered the master key, which, combined with a protocol flaw, enabled a worm.

Mitigations: Fixing these problems requires vendors to ship patches through device updates and to practice security by design. Related work includes "Fear and Logging in the Internet of Things"; SmartAuth, which identifies the permissions of IoT apps (mainly for SmartThings and Apple Home); and FlowFence, which splits an app into sensitive and non-sensitive halves, though this requires developer effort.

Stakeholders: Vendors are responsible for patching and updating vulnerable devices, but should also grant end-users some control, such as the ability to disable problematic services. SmartAuth offers a way to derive authorization rules, but only vendors can apply it; Sonos lets users mitigate vulnerabilities through network isolation.

Mobile Application

Attack vectors: over-privilege, programming errors, and hard-coded sensitive information.

August Smart Lock: the authors dumped keys from sensitive information in the app. IoTFuzzer: uses the app to fuzz the device (the app can equally be used to attack it). Apps can be used to collect device information and then reconfigure the router's firewall, exposing the device to the public Internet. Hanguard: lax security assumptions in apps leak device privacy (the app is the entry point to the device, and vendors often assume the network the app sits on is trusted).

Mitigations: role-based access control.

Stakeholders: Mobile security depends on both user and vendor. Users usually hold the permission controls and should install apps only from official app stores; vendors should fix programming errors and store data securely.

Cloud Endpoint

Attack vectors: August Smart Lock: an insecure cloud-side API allowed privilege escalation. Unsigned firmware update packages. Web flaws such as XSS and username enumeration. AutoForge: forging app requests to brute-force passwords, hijack tokens, and more.

Mitigations: authentication; fine-grained access control.

Stakeholders: Since cloud platforms are generally managed by the vendor alone, the security of cloud infrastructure and API implementations is the vendor's responsibility.

Communication

Classes of protocols: Internet protocols and low-energy protocols (Zigbee, Z-Wave, Bluetooth-LE). Application-layer protocols: DNS, HTTP, UPnP, NTP.

Attack vectors

EDNS resolution leaking information. MITM attacks on NTP used to bypass HSTS. UPnP implementations lacking authentication, plus memory-corruption bugs. TLS/SSL: the TLS 1.0 IV weakness and the RC4 weaknesses. BLE, Zigbee, Z-Wave: flaws in the protocol designs themselves; replay attacks are easier against low-energy protocols.

Mitigations: For HTTP, UPnP, DNS, and NTP, abandon the insecure variants and use the latest protocol versions. For TLS/SSL implementation flaws, upgrading server- and client-side libraries to the latest versions should resolve the vulnerabilities. For low-energy communication, the first-generation Zigbee and Z-Wave protocols have serious flaws with limited mitigations; vendors can disable these protocols. Researchers have also shown that merely monitoring an IoT device's traffic can side-channel private data; Apthorpe et al. designed in-home traffic shaping to defeat such side-channel attacks.

Stakeholders: ISPs can see the packets of IP-based protocols but are not responsible for mitigation, though they must still meet their corresponding obligations (for example, in the Mirai DDoS case an ISP cannot stop malicious traffic leaving a device, but it can block the device's access to the C&C domains). For low-energy protocols, vendors can mitigate by disabling pairing with vulnerable devices.
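One concrete way a vendor can act on the TLS/SSL advice above is to refuse legacy protocol versions outright. A minimal sketch with Python's standard ssl module (illustrative only; device firmware would do the equivalent in its embedded TLS library):

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# closing off the TLS 1.0 IV and RC4 weaknesses mentioned above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation stays on by default, which also defeats the
# self-signed-certificate MITM scenarios that scanners commonly flag.
print(ctx.minimum_version, ctx.verify_mode == ssl.CERT_REQUIRED)
```

Any connection wrapped with this context will fail the handshake against a server that only speaks TLS 1.0/1.1, rather than silently negotiating down.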

Evaluation

The authors evaluated 45 popular devices across several categories: appliances, cameras, home assistants, home automation, media, and network devices.

The test network consisted of a Linux machine that captured all traffic and a router providing a Wi-Fi hotspot. Captured traffic was analyzed; the devices and cloud endpoints were run through vulnerability scanners, and the apps through automated audit tools. Several difficulties arose: automatic device updates (disabled manually); classifying cloud platforms (identified manually, excluding CDNs); analyzing wireless-to-wireless traffic; and decrypting iOS apps (by dumping them from memory).

The app scanners were MobSF (Mobile Security Framework), Qark, and Kryptowire. Of the 45 devices, 42 have apps: 41 on Android and 42 on iOS. Of these, 24 were over-privileged, 15 contained hard-coded API keys, and 17 used hard-coded keys and IVs.
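Scanners like MobSF flag hard-coded credentials essentially by pattern-matching decompiled app sources. A toy version of that check is sketched below (the regex and the sample string are illustrative, not MobSF's actual rules):

```python
import re

# Crude pattern for "secret-looking" assignments in decompiled sources:
# a suspicious name, then = or :, then a quoted high-entropy-ish value.
SECRET_RE = re.compile(
    r'(?i)\b(api[_-]?key|secret|token|iv)\b\s*[=:]\s*["\']([A-Za-z0-9+/=]{8,})["\']'
)

def scan_source(text):
    """Return (name, value) pairs that look like hard-coded credentials."""
    return [(m.group(1), m.group(2)) for m in SECRET_RE.finditer(text)]

# Hypothetical line of decompiled Android source.
sample = 'String API_KEY = "QWxhZGRpbjpvcGVuIHNlc2FtZQ==";'
print(scan_source(sample))
```

Real tools layer entropy scoring and known-provider key formats on top of patterns like this to keep false positives manageable.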

Nessus scans of the 45 devices covered roughly 4,000 domains: cloud-based services (950); third-party services such as CDNs (1,287); hybrid, i.e., vendors using AWS or Azure services (630); and unknown (1,288).

Nessus was also used to profile each device's operating system, services, and vulnerabilities. Across the 45 devices it found 84 services, 39 of them with issues. The services were mainly SSH, UPnP, HTTP, DNS, Telnet, and RTSP, and the issues included misconfigured TLS/SSL (self-signed certificates, expired certificates, short keys) and unauthenticated UPnP access.

Traffic analysis used Nessus Monitor, ntop-ng, Wireshark, and sslsplit, the last for MITM. There were 43 device-to-cloud (D-C), 35 app-to-cloud (A-C), and 27 app-to-device (A-D, LAN) channels. The IP traffic comprised DNS (41), HTTP (38), UPnP (21), and proprietary protocols (5).

MITM succeeded against D-C (4), A-C (2), and A-D (20) channels; encryption was used on D-C (40), A-C (24), and A-D (7).

Mitigations

Device: update over a secure channel and verify the integrity of update contents; check before activation that the configuration is correct and secure; interact only with devices whose identity has been verified. Mobile: sensitive information such as API keys should be derived at install time and stored secretly; cryptographic algorithms should be implemented with mature third-party libraries wherever possible. Cloud: vendors should prefer commercial cloud platforms, manage endpoint configuration through APIs, and drop support for insecure protocols. Communication: verify endpoint identities to prevent MITM attacks, and protect the integrity of communication protocols.
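The "verify update integrity before installing" advice for devices boils down to a check like the sketch below. Real firmware updates should use an asymmetric signature (e.g., RSA or Ed25519) so the device never holds a signing secret; this standard-library sketch substitutes an HMAC purely to show the verify-before-flash flow.

```python
import hashlib
import hmac

# Stand-in for the vendor's signing key. With asymmetric signatures the
# device would hold only the public key, never this secret.
VENDOR_KEY = b"vendor-signing-key-demo"

def sign_image(image: bytes) -> bytes:
    """Vendor side: tag the firmware image before distribution."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def install(image: bytes, tag: bytes) -> bool:
    """Device side: verify integrity before flashing anything."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new-firmware-build"
tag = sign_image(firmware)
print(install(firmware, tag))            # genuine update verifies
print(install(firmware + b"\x00", tag))  # tampered image is rejected
```

The constant-time compare_digest matters here: a naive byte-by-byte comparison would leak timing information about the expected tag.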

The View from KubeCon+CloudNativeCon Seattle


The View from KubeCon+CloudNativeCon Seattle
Containers and Kubernetes Become Enterprise Ready

In case there was any doubt about the direction containers and Kubernetes are going, KubeCon+CloudNativeCon 2018 in Seattle should have dispelled it. The path is clear: the technology is maturing and keeps adding features that make it conducive to mission-critical, enterprise applications. From the very first day, the talk was about service meshes and network functions, logging and traceability, and storage and serverless compute. These are couplets that define the next generation of management, visibility, and core capabilities of a modern distributed application. On top of that are emerging security projects such as SPIFFE & SPIRE, TUF, Falco, and Notary. Management, visibility, growth in core functionality, and security: all of these are critical to making container platforms enterprise ready.

If the scope of KubeCon+CloudNativeCon and the Cloud Native Computing Foundation (CNCF) is any indication, the ecosystem is also growing. This year there were 8,000 people at the conference, a sellout. The CNCF has grown to more than 300 vendor members, and there are 46,000 contributors to its projects. That’s a lot of growth compared to just a few years ago. This many people don’t flock to sinking projects.

Despite all the growth in ecosystem and capabilities, there were still a fair number of container-curious people at KubeCon+CloudNativeCon. Their companies sent them because they were just beginning to explore containers and Kubernetes. They had a lot of questions about the viability of containers and microservices in their more demanding environments, especially regulated ones. Many of the questions I was asked, especially about visibility within clusters and security, were important discussion points. Some of the doubts, though, were a smokescreen for organizations that resist change; it was obvious they were looking for an excuse to stick with old ideas.

Another issue holding back container architectures is confusion in the market. Cloud Foundry, serverless platforms, and Kubernetes platforms overlap and use similar technology, namely containers. Because vendors often present these as competing platforms, depending on what they sell, the market appears more fragmented than it is. Even within technologies there is a lot of confusion. Take serverless computing: ask ten people what serverless is and you will get eleven different answers. Some vendors want to make it a marketing label they can slap onto anything to make it shiny and new. This makes life very confusing for an enterprise IT professional trying to design next-generation applications.

Some of this confusion is just an artifact of a lifecycle problem. Five years ago, there were several competing container formats from Docker, Rancher, CoreOS, and others. That has changed: containers have coalesced around a common image format, and container engine vendors are no longer competing on the basics but on performance and security layered over standard runtimes such as containerd.

No one is advocating change for the sake of change. We are at a point, however, where the demands of modern applications require a new architecture. Kubernetes represents an excellent platform for highly distributed applications where portability, performance, and development lifecycle problems are easily managed. The future of containers and Kubernetes as the base of the new stack was on display at KubeCon+CloudNativeCon, and it’s a bright one. Expect to see more enterprise applications that rely on rigorous architectures to be built on Kubernetes.

A Post-Compliant World? Part 2

Introduction

Do we still have infosec compliance? Is the concept of upholding data and computer security outmoded?

I showed in my previous piece how early attempts at compliance were based on pre-computer principles of locks and keys, until organizations realized that model no longer fit. The new technology evolved so quickly that it became futile to look back to traditional ways for security solutions.

Being in infosec compliance is frustrating. We want to protect, not restrict. If you’re a compliance manager, you’ll be familiar with the positive arguments we put forward about how compliance enables business, how it inoculates against legal pitfalls and how it can enhance an organization’s reputation (so important for market competition). In spite of all this, security really is an inhibitor. In this era of technological breakthrough and pressure to innovate, compliance can seem like a ball and chain to technologists. To them, our pitches must sound like claiming seatbelts enhance driving.

What, then, is the modern argument for infosec compliance? From a compliance manager’s viewpoint, batting for it can seem to be a series of long innings, with computer innovation having an impressive variety of pitches.

On the other hand, most technology innovators will not openly oppose security any more than car manufacturers oppose better car safety. Through news headlines and personal experience, they too must be aware of the cost of security breaches, and how commonplace errors can lead to any size of business and any individual getting hurt. Quite reasonably, they will still want to see security controls eased (they won’t say “weakened”). They do want quicker uptake of innovation, especially where it gives advantages (however fleeting) through new ways of working and, of course, to profit margins.

The arguments for security are also frequently undermined by the natural drive for ease of access to data, i.e., ever more convenient availability. Trusted government services even rely on ease of access to meet promises of cheapness and reliability.

Compliance Lives!

Let’s be optimistic and consider the most practical arguments for why compliance should survive in some form or another. Over the past few years, the number of people with computers (and even without) who have been affected by hacking, including its weaponization by criminals to extract money, has exploded. In 2016, 40% of millennials had already experienced cybercrime. The numbers are certain to increase, with 75% of the world’s population expected to have some form of digital access by 2022.

The ways in which people might be hurt in future will change too, as criminals (and, alarmingly, foreign powers) adopt new exploits for new technologies. Consider how our reliance upon Internet technology is increasing daily through the Internet of Things (IoT). And those “things” include ever more personal and domestic services, all very vulnerable to exploitation.

Even if that seems too speculative, we do know for certain that our critical national infrastructure has been targeted for attack for some time, and more recently, even our democratic institutions.

Big Brother/Sister

Technological innovation has become almost entirely private-sector-led since World War 2, but the wartime disciplines and traditions around security did not transfer from that age. Though this is a good thing for both democracy and progress, it also meant that the public sector and military bureaucrats, who were then entirely responsible for security, were left behind while innovation continued to multiply and accelerate.

Nowadays there are few government services that can claim to have better security than those provided by the private sector. And governments increasingly rely on the private sector to supply trusted government services to the public, as a way of driving down direct costs to the public finances.

Perhaps unrestricted technology growth will lead to some future tipping point, when a critical mass of people is hurt so badly through cyber-theft of their money, goods and information that they demand security safeguards over technological advances. We have seen no sign of this yet: it certainly does not appear on any current election agenda. People still generally want their government(s) to make the ultimate standards of rules for society, so long as those rules do not stop them going about their peaceful business.

I was a latter-day security bureaucrat. In the three decades I have overseen security, the biggest challenge was making meaningful rules and regulations for innovations that had already galloped through to the next innovation, sometimes before any new rules could be tested. In short, government-centered security compliance simply cannot keep up with security changes.

Trouble With Laws

A big drawback of government controls is the slow process of lawmaking. Elected governments cannot control technological change, yet they can be pressured by electors to “do something” when the technology starts to hurt. But laws need consensus, which can easily be disrupted by the short life cycle of elected governments and their fickle agendas. Drafting new laws also needs expertise and funding, and the target of legislation can change quickly as new technologies create new security exploits.

An example is the UK’s Computer Misuse Act , drafted in 1990 to bridge a hole in UK law that had allowed shoulder-surfing hackers to escape prosecution. That law has had to be continuously amended to keep up with post-1990 exploits like DDoS attacks. But the continuation of such old-fashioned terminology in the title of a law is a direct commentary on the inability of government to keep up with infosec.

Since the 1990s we have seen a number of significant new laws which, though not centered on computing, have affected it through regulation of data collection and management. For example, the U.S. HIPAA (Health Insurance Portability and Accountability Act) has had a significant effect on how patient data is handled, while the GLBA (Gramm-Leach-Bliley Act) puts legal constraints on how institutions can share information they hold about individuals. The regulations that underpin these laws have created small islands of good infosec compliance, upon which other infosec best practice can take root and grow. However, being based on a variety of laws, these infosec-backing regulations are not connected and are always at risk of being undermined by the repeal of legislation. Consider, for instance, whether an international banking organization would have better or worse infosec compliance if the Sarbanes-Oxley regulations were withdrawn.

More recently, the growth of connectivity across national boundaries has been a challenge for governments obligated to guarantee the privacy of their citizens. We have seen the first major attempts by U.S. companies to accommodate the data protection legislation (the GDPR) now enforced in the EU. This is a new area, where compliance is mandated even for non-EU-based companies that handle data belonging to EU citizens.

Standards and Best Practice

Standards without a legal basis (e.g., ISO 27001 and PCI-DSS ) also support infosec compliance. They are more elastic than laws and regulations, able to grow alongside technology and allow for business innovation. They aren't tied to short-term government agendas and usually hold the promise (for organizations) of enhanced trust and therefore more business. These frameworks can require much effort to adopt and maintain through ongoing compliance checking, sometimes backed by third-party assurance certifications.

Generally, organizations need some incentive to voluntarily increase their infosec compliance, and the promise of better security management measures, though very useful for infosec compliance monitoring, won't provide it on its own. Some U.S. states have even attempted to integrate infosec standards into their laws, but this creates a legislative problem for the lawmakers of those states whenever the regulations need to change.

Maturity versus Compliance

The 2013 issue of Presidential Executive Order 13636 (Improving Critical Infrastructure Cybersecurity) and the 2014 introduction of the Cybersecurity Framework marked a shift from traditional compliance toward the assessment of security maturity levels. With its emphasis on critical infrastructure, the Cybersecurity Framework is a recognition that former expectations of full security compliance are unrealistic, and that organizations should instead seek well-developed security systems that are responsive to a wide range of security issues; that is, systems that are "mature." Organizations can now use a variety of assessment tools to calculate their security maturity and focus on increasing their resilience as part of a managed program of improvement.

At present, security compliance rests on multi-faceted approaches: a company that handles medical matters may base its compliance upon mandatory (i.e., HIPAA) requirements while reinforcing these through best-practice standards such as ISO 27001. Where tools are used to help combine and maintain these efforts (as with the Cybersecurity Framework), it seems fair for such an organization to claim it is an infosec-responsible organization, even with the expectation that some security events will occasionally get through its defenses.

Awareness Is Key

The inevitability of security events makes effective infosec awareness programs ever more important. Where automation and policies fail, we have to rely on the human factor as a serious defense.

The effectiveness of an infosec awareness program is now an even more crucial part of any compliance program. Well-managed infosec awareness and compliance materials will support this. With the increased emphasis on maturity, such programs must be innovative, flexible and able to assess user responses to security issues. They also need to underscore relevant legal concerns. As technology is personalized, miniaturized and domesticated, infosec awareness and user responsibility must surely grow.

In my next piece, I’ll look at likely future trends for compliance. I’ll consider the continued drift from corporate computing and office-based technology towards cloud-based data retrieval and the blurring of lines between corporate and personal computing.

Footnotes

Source: 2016 Norton Cyber Security Insights report

Morgan, S. Cybersecurity Ventures/Herjavec Group 2017 Cybercrime Report. Morgan also asserts that 3.8 billion of the world's population had Internet access in 2017, and he projects 6 billion (75% of that future world population) will have it by 2022.

For example, See US-CERT Alert TA18-074A: ‘Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors’ (March 2018).

See ICA: “Assessing Russian Activities and Intentions in Recent US Elections” (January 2017)

E.g., through (2011) Executive Order 13571 on Streamlining Service Delivery and Improving Customer Service

See Hughes, M. The Computer Misuse Act: The Law That Criminalizes Hacking in the UK (05/2015)

See PC magazine article What Americans Need to Know About GDPR (Rist/Martinez May 25, 2018) for an excellent briefing.

It’s impossible to list the number of services worldwide that could adopt ISO27001. However, for the sake of reference, around 39,500 certifications existed in 2017. Source: ISO Survey 2017.

Coincheck's Reported FSA Approval Denied: Can the Exchange Recover After the Hack?


Will this embattled exchange win the FSA's trust?



On Wednesday, the English-language business journal Nikkei Asian Review published an article claiming that Japan's Coincheck would receive approval from the country's financial regulator this month to become a licensed cryptocurrency exchange.

The article said Japan's Financial Services Agency (FSA) had determined that Coincheck qualified for a license to operate a cryptocurrency exchange in Japan because Coincheck had improved its "customer protection and other systems" after being acquired by the online brokerage Monex Group in April of this year.

But that same day, the news was rebutted in a statement from Coincheck's parent company, Monex Group.

The public statement read: "Coincheck is currently under review for cryptocurrency exchange registration. However, there is no established fact of registration. If, in the future, there are facts concerning Coincheck that should be disclosed, we will disclose them promptly and appropriately."

Coincheck, at the Center of a Record Theft

Once Japan's largest exchange, Coincheck was for a time riding high.

In the ten months from April 2017 to January 2018, Coincheck's revenue reached 53.2 billion yen (about US$490 million), at one point rivaling Japan Exchange Group, operator of Japan's largest stock and derivatives exchanges.

But in January of this year, hackers stole 526 million XEM (about US$400 million) from the exchange. Coincheck fell from its pedestal, becoming the victim of what was then the largest theft in the history of digital currency.

The attack forced Coincheck to suspend a series of services, including "withdrawals of any funds, including yen-denominated funds; sales and purchases of all cryptocurrencies other than Bitcoin; and certain credit card and convenience store payment services."

The bigger butterfly effect came after the hack: Japan's FSA issued a rare warning requiring all Japanese digital currency exchanges to re-examine their system security and immediately report security incidents to regulators. At the same time, the FSA suspended all cryptocurrency exchange license applications, only starting to work through the large backlog in September of this year. Orders for cryptocurrency hardware wallets in Japan also surged.

Just two days after the attack, however, Coincheck published a "compensation policy" aimed at compensating the roughly 260,000 users affected by the hack. In March of this year, MarketWatch reported that the company had successfully refunded its customers, paying out 46.3 billion yen (about US$435 million at the time).

Monex's Remediation

After the January hack, Coincheck received two business improvement orders from the FSA, directing it to improve its customer protection and anti-money-laundering (AML) measures.

In April of this year, the exchange further decided to reorganize its shareholding and management, agreeing to become a wholly owned subsidiary of Monex. Once the acquisition was complete, Monex would lead Coincheck's rebuilding: Monex COO Toshihiko Katsuya would become Coincheck's president, while Coincheck's incumbent president and COO would step down immediately.

Under the new leadership, Coincheck carried out a series of reforms.

Among them: on June 18, to further reduce money-laundering and related risks, Coincheck stopped handling business related to four anonymity-focused digital currencies: Monero, Zcash, Dash and Augur.

On October 30, Monex issued a notice announcing that, with the exchange's technical security confirmed by "external experts," Coincheck would gradually restart its services.

According to the notice, from October 30 customers of the Coincheck exchange could once again open new accounts; deposit and purchase BTC, ETC, LTC and BCH; send and sell the cryptocurrencies tradable on the exchange; and deposit and withdraw yen.

The notice also stated that the exchange was working to restore services related to deposits and purchases of ETH, XEM and XRP, as well as yen deposits at convenience stores via "quick deposit." During this period, the exchange was also working to reopen payment and affiliated services, allow new leveraged trading, and let customers pay their electricity bills in virtual currency through Coincheck DENKI.

According to Monex Group CEO Matsumoto, speaking at a press conference, the biggest goal remains obtaining business registration from Japan's FSA.

That goal is no simple matter. Since Japan revised its Payment Services Act in April 2017, all crypto exchanges operating in the country must obtain a license. With the FSA continually raising its requirements for applicants throughout 2018, as many as 200 applicants were still awaiting a decision on their operating licenses as of press time.

Moreover, ever since Coincheck began its overhaul, voices in the Japanese industry have held that, although the exchange has worked steadily to improve the security of its trading environment, the pace of improvement is clearly still not fast enough.

With this apparent good news now denied, one cannot help but wonder: will this embattled exchange win the FSA's trust?


Cylance Adds Playbook-Driven Response to EDR Solution


Automated Processes and Procedures Ensure Consistent Incident Response Across the Enterprise

IRVINE, Calif. (BUSINESS WIRE) Cylance Inc., the leading provider of AI-driven, prevention-first security solutions, today announced the availability of response playbooks for automated incident response as part of its leading endpoint detection and response offering, CylanceOPTICS .

CylanceOPTICS customers around the world now benefit from the ability to set up consistent, multistep, automated responses or “playbooks” for immediate execution on an endpoint where a threat detection occurs. Playbook responses work from a set of AI-based rules that describe actions executed against input data and triggered by an event. Cylance playbooks include the effective replication of security analyst decision making with no cloud or human intervention required.

“A minor security event can turn into a widespread, uncontrolled security incident in a matter of milliseconds,” said Sasi Murthy , vice president of product marketing at Cylance. “By turning every endpoint into a miniature security operations center, we provide organizations the ability to instantly detect and respond to threats locally without having to send data to the cloud, which saves valuable time and reduces the risk of a damaging, and very public, compromise.”

CylanceOPTICS exposes field-tested artificial intelligence to detect and prevent advanced threats, enabling organizations to use automated analyses to disrupt attackers across their environments. It also builds the policies for device control and memory exploitation protection that prevent attacks from executing in the network. By creating automated playbooks within CylanceOPTICS, organizations can be confident that appropriate and strategic responses will be taken, regardless of who is staffing the security environment.

One of the biggest challenges security teams face today is the widening global cybersecurity skills shortage, with some forecasts estimating a shortfall of some two million positions in 2019. Response playbooks expand the capabilities of Cylance’s next-generation AI platform by enabling automated incident response, freeing up analysts for higher-value tasks without an increase in headcount or process complexity.

“Hospitals and clinics have become popular targets for cyber threat actors, who understand the critical value of clinical data and operational systems and devices in the healthcare industry,” said Eric Cornelius, chief product officer at Cylance. “The ability to set up response playbooks with CylanceOPTICS not only provides security analysts peace of mind, it also ensures incidents are contained immediately on the endpoint without compromising the network hospital staff and patients rely on.”

CylanceOPTICS users can now create up to 100 playbooks to execute tasks automatically on endpoints when a detection rule (whether static, machine-learned, or custom) is triggered. Playbooks can be set up to execute both OPTICS and third-party product responses, such as forensic analysis, memory capture, and IT ticketing. These automated responses eliminate the execution latency that can cause minor security events to balloon into major, business-crippling security incidents. To learn more about Cylance response playbooks, visit https://www.cylance.com/en-us/platform/products/cylance-optics.html .
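The release does not publish CylanceOPTICS's actual playbook format, but the trigger-plus-actions model it describes can be sketched roughly as follows. The rule name and action names below are hypothetical stand-ins, not Cylance APIs.

```javascript
// Hypothetical sketch of the playbook model: a detection rule mapped to an
// ordered list of local response actions, run on the endpoint itself with
// no cloud round-trip. All names here are illustrative, not Cylance APIs.
const playbooks = [
  {
    trigger: (event) => event.rule === "ransomware_behavior",
    actions: ["isolate_host", "capture_memory", "open_it_ticket"],
  },
];

// Run every playbook whose trigger matches the event, executing its actions
// in order, and return the list of actions that ran.
function runPlaybooks(event, execute) {
  const executed = [];
  for (const pb of playbooks) {
    if (!pb.trigger(event)) continue;
    for (const action of pb.actions) {
      execute(action, event); // in a real agent: call into the endpoint
      executed.push(action);
    }
  }
  return executed;
}

const ran = runPlaybooks(
  { rule: "ransomware_behavior", host: "ep-042" },
  (action, ev) => console.log(`${ev.host}: ${action}`)
);
```

A third-party response such as forensic capture or IT ticketing would simply be another entry in `actions` whose `execute` handler calls out to that product.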

About Cylance Inc.

Cylance develops artificial intelligence to deliver prevention-first, predictive security products and smart, simple, secure solutions that change how organizations approach endpoint security. Cylance provides full spectrum predictive threat prevention and visibility across the enterprise to combat the most notorious and advanced cybersecurity attacks. With AI-based malware prevention, threat hunting, automated detection and response, and expert security services, Cylance protects the endpoint without increasing staff workload or costs. We call it the Science of Safe. Learn more at www.cylance.com .

Contacts

KC Higgins

Cylance Media Relations

+1 303.434.8163

khiggins@cylance.com

Digital Risk Management: A Working Definition

Introduction

We all live in a rapidly digitizing world: the phone in your pocket exceeds the computing power of the world’s supercomputers of just a few decades ago. We have all seen the exponential growth and adoption of digital products and technologies. It is the breakneck speed with which these technologies have been produced and adopted, by consumers and organizations alike, that has made the concept of “digital risk management” so hard to define.

We are in the midst of an industrial revolution, the fourth, specifically. During each of these benchmark events in history, industry was collectively and irrevocably changed: from the advent of mechanized production in the late 1700s, to the assembly line of the late 1800s and early 1900s, to, more recently, the transformation of communication by the internet. The transformation we are going through now, though, is something completely different.

Building upon the development of the modern computer and the adoption of the internet, the lines between the digital and physical worlds are becoming increasingly blurry. In fact, it is predicted that 2019 will be the year that the impact of cyberattacks makes it to the physical world .

Defining the terms

As with any tectonic shift that impacts millions, the terminology that we use is still quite disparate. Actually, it is this lack of a common language that stands in the way of many technical leaders working to get buy-in from their executives.

Information security: For our purposes, information security is the umbrella term for all activities performed by the CIO and CISO to ensure that their organization stays secure. Information security spans both the physical and digital worlds.

Integrated risk management: As seen in the 2018 Gartner Magic Quadrant, integrated risk management is the natural progression of GRC. Where GRC was capable of managing and mitigating risks in the physical world, such a fragmented approach cannot succeed in the digital world. Integrated risk management provides the single pane of glass necessary for information security leaders to see a holistic view of their environments and perform the continuous compliance necessary to secure a digital organization.

Digitization: Many organizations have realized that in order to ensure success, they must embrace new technologies. Gartner has broken these technologies into cloud, mobile, social, big data, third-party technology providers, OT and the IoT.

Digital risk management: An outgrowth of digitization, digital risk management is the role that CISOs play in adopting these digital technologies. Digital risk management and cybersecurity are, in most cases, treated as interchangeable.

Why Digital Risk Management is a fundamental change to risk and compliance management

In order to understand why digital risk management is so ambiguous and misunderstood, we must look at the way security teams approached risk and compliance before the fourth industrial revolution.

Under a checkbox compliance approach, using a GRC tool, security teams would perform scheduled assessments and have to assemble the necessary information each time. For many, this was, and still is, done in static spreadsheets. The information stored in those spreadsheets is outdated as soon as the team hits save: a static snapshot of a dynamic environment.

This process is predicated on the notion that the adoption of new tools, the addition of new vendors, and the implementation of new technologies is slow, and in the past it was. Organizations could not procure and implement a new tool in hours; it took months. Price points were also a limiting factor: in the past, new tools and technologies had astronomical price tags that needed board approval before they could move forward. Today, the adoption and implementation of new technologies is blistering. Every business unit within an organization is adding new tools to its productivity stack and implementing them faster than ever. The price of powerful solutions has dropped and, as a result, spending is more discretionary to the managers and directors of those units. The technology adoption process is no longer slow enough for GRC to keep up.

Risk and compliance managers need two things to keep their organization secure in a digital world: a risk-aware culture that scales beyond just their own business unit, and tools flexible and smart enough to manage and scale as the organization adopts new technologies.

Don’t let manual effort slow down your digital transformation

As we’ve written about before, the CISO and the CEO must be a collaborative team to ensure business growth while staying secure. It is in cases of digitization and digital risk management that the CISO is at the greatest risk of appearing to be a hindrance rather than an enabler of growth.

Without the proper solution to enable the CISO to manage the compliance and risks of a digital organization, they will be hard-pressed to do their job. Given that digital technologies are dynamic, static tools used to assess the risk will leave an organization open to more and more threats.

Today, it is either keep your spreadsheets and slow (or even stop) your organization’s digital transformation, or adopt a powerful new solution of your own (every other business unit gets to, why not security?) and become an ally to the CEO and empower your digitization.


2018 Bug Bounty Year in Review


With 2018 coming to a close, we thought it a good opportunity to once again reflect on our Bug Bounty program. At Shopify, our bounty program complements our security strategy and allows us to leverage a community of thousands of researchers who help secure our platform and create a better Shopify user experience. This was the fifth year we operated a bug bounty program, the third on HackerOne and our most successful to date (you can read about last year’s results here ). We reduced our time to triage by days, got hackers paid quicker, worked with HackerOne to host the most innovative live hacking event to date and continued contributing disclosed reports for the bug bounty community to learn from.

Our Triage Process

In 2017, our average time to triage was four days. In 2018, we shaved that down to 10 hours, despite largely receiving the same volume of reports. This reduction was driven by our core program commitment to speed. With 14 members on the Application Security team, we're able to dedicate one team member a week to HackerOne triage.

When someone is the dedicated “triager” for the week at Shopify, that becomes their primary responsibility, with other projects becoming secondary. Their job is to ensure we quickly review and respond to reports during regular business hours. However, having a dedicated triager doesn't preclude others from watching the queue and picking up a report.

When we receive reports that aren't N/A or Spam, we validate before triaging and open an issue internally since we pay $500 when reports are triaged on HackerOne. We self-assign reports on the HackerOne platform so other team members know the report is being worked on. The actual validation process we use depends on the severity of the issue:

Critical: We replicate the behavior and confirm the vulnerability, page the on-call team responsible and triage the report on HackerOne. This means the on-call team is notified of the bug immediately, and Shopify works to address it as soon as possible.

High: We replicate the behavior and ping the development team responsible. This is less intrusive than paging but still a priority. Collaboratively, we review the code for the issue to confirm it's new and triage the report on HackerOne.

Medium and Low: We’ll either replicate the behavior and review the code, or just review the code, to confirm the issue. Next, we review open issues and pull requests to ensure the bug isn't a known issue. If there are clear security implications, we'll open an issue internally and triage the report on HackerOne. If the security implications aren't clear, we'll err on the side of caution and discuss with the responsible team to get their input about whether we should triage the report on HackerOne.

This approach allows us to quickly act on reports and mitigate critical and high impact reports within hours. Medium and Low reports can take a little longer, especially where the security implications aren't clear. Development teams are responsible for prioritizing fixes for Medium and Low reports within their existing workloads, though we occasionally check in and help out.

H1-514
H1-514 in Montreal

In October, we hosted our second live hacking event, H1-514, the first hacking event held at our office in Montreal, Quebec. We welcomed over 40 hackers to our office to test our systems. To build on our program's core principles of responsiveness, transparency and timely payouts, we wanted to do things differently than other HackerOne live hacking events. As such, we worked with HackerOne on a few firsts for live hacking events:

While other events opened submissions the morning of the event, we opened submissions when the target was announced, so we could pay hackers as soon as the event started and avoid a flood of reports.

We disclosed resolved reports to participants during the event to spark creativity, instead of leaving this to the end of the event when hacking was finished.

We used innovative bonuses to reward creative thinking and hard work from hackers testing systems that are very important to Shopify (e.g. GraphQL, race conditions, oldest bug, regression bonuses, etc.) instead of awarding more money for the number of bugs people found.

We gave hackers shell access to our infrastructure and asked them to report any bugs they found. While none were reported at the event, the experience and feedback informed a continued Shopify infrastructure bounty program and the Kubernetes product security team's exploration of their own bounty program.

When we signed on to host H1-514, we weren't sure what value we'd get in return since we run an open bounty program with competitive bounties. However, the hackers didn't disappoint and we received over 50 valid vulnerability reports, a few of which were critical. Reflecting on this, the success can be attributed to a few factors:

We ship code all the time. Our platform is constantly evolving, so there's always something new to test; it's just a matter of knowing how to incentivize the effort for hackers. (You can check the Product Updates and Shopify News blogs if you want to see our latest updates.)

There were new public disclosures affecting software we use. For example, Tavis Ormandy's disclosure of Ghostscript remote code execution in ImageMagick, which was used in a report during the event by hacker Frans Rosén.

We used bonuses to incentivize hackers to explore the more complex and challenging areas of the bounty program. Bonuses included GraphQL bugs, race conditions and the oldest bug, to name a few.

Accepting submissions early allowed us to keep hackers focused on eligible vulnerability types and avoid them spending time on bugs that wouldn't be rewarded. This helped us manage expectations throughout the two weeks, keep hackers engaged and make sure everyone was using their time effectively.

We increased our scope. We wanted to see what hackers could do if we added all of our properties into the scope of the bounty program and whether they'd flock to new applications looking for easier-to-find bugs. However, despite the expanded scope, we still received a good number of reports targeting mature applications from our public program.
Stats (as of Dec 6, 2018)

2018 was the most successful year to date for our bounty program. Not including the stats from H1-514, we saw our average bounty increase again, this time to $1,790 from $1,100 in 2017. The total amount paid to hackers was also up $90,200 compared to the previous year, to $155,750, with 60% of all resolved reports receiving a bounty. We also went from one five-figure bounty awarded in 2017 to five in 2018, marked by the spikes in the following graph.


Bounty Payouts by Date

As mentioned, the team committed to q

$10,000 research fellowships for underrepresented talent


The Trail of Bits SummerCon Fellowship program is now accepting applications from emerging security researchers with excellent project ideas. Fellows will explore their research topics with our guidance and then present their findings at SummerCon 2019 . We will be reserving at least 50% of our funding for marginalized, female-identifying, transgender, and non-binary candidates. If you’re interested in applying, read on!

Why we’re doing this

Inclusion is a serious and persistent issue for the infosec industry. According to the 2017 (ISC)2 report on Women in Cybersecurity , only 11% of the cybersecurity workforce identify as women, a deficient proportion that hasn't changed since 2013. Based on a 2018 (ISC)2 study , the issue is worse for women of color, who report facing pervasive discrimination, unexplained denial or delay in career advancement, exaggerated highlighting of mistakes and errors, and tokenism.

Not only is this ethically objectionable, it makes no business sense. In 2012, McKinsey & Company found, with "startling consistency," that "for companies ranking in the top quartile of executive-board diversity, returns on equity (ROE) were 53 percent higher, on average, than they were for those in the bottom quartile. At the same time, earnings before interest and taxes (EBIT) margins at the most diverse companies were 14 percent higher, on average, than those of the least diverse companies."

The problem is particularly conspicuous at infosec conferences: a dearth of non-white non-male speakers, few female attendees, and pervasive reports of sexual discrimination. That’s why Trail of Bits and one of the longest-running hacker conferences, SummerCon, decided to collaborate to combat the issue. Through this fellowship, we’re sponsoring and mentoring emerging talent that might not otherwise get enough funding, mentorship, and exposure, and then shining a spotlight on their research.

Funding and mentorship to elevate your security research

The Trail of Bits SummerCon Fellowship provides awarded fellows with:

A $10,000 grant to fund a six-month security research project

Dedicated research mentorship from a security engineer at Trail of Bits

An invitation to present findings at SummerCon 2019

50% of the program spots are reserved for marginalized candidates, including people of color and female-identifying, transgender, and non-binary candidates. Applicants of all genders, races, ethnicities, sexual orientations, ages, and abilities are encouraged to apply.

The research topics we’ll support

Applicants should bring a low-level programming or security research project that they’ve been wanting to tackle but have lacked the time or resources to pursue. They’ll have strong skills in low-level or systems programming, reverse engineering, program analysis (including dynamic binary instrumentation, symbolic execution, and abstract interpretation), or vulnerability analysis.

We’re especially interested in research ideas that align with our areas of expertise. That way, we can better support applicants. Think along the lines of:

Binary analysis

Static/dynamic analysis techniques

Blockchain and smart contract security

Cryptography

LLVM engineering

Software verification

How do I apply?

Apply here!

We’re accepting applications until January 15th. We’ll announce fellowship recipients in February.

Interested in applying? Go for it!

Submissions will be judged by a panel of experts from the SummerCon foundation, including Trail of Bits. Good luck!

Analysis of the Local File Inclusion Vulnerability in Kibana, a Core Elasticsearch Plugin (CVE-2018-17246)

Author: Ivan1ee @ 360 Cloud Shadow Lab

Elasticsearch recently published a security advisory: the Console plugin in Kibana versions prior to 6.4.3 and prior to 5.6.13 contains a serious local file inclusion (LFI) vulnerability that can lead to denial-of-service attacks, arbitrary file reads, and, combined with a third-party application, a reverse-shell attack. Below, I analyze and reproduce the vulnerability's background, attack principle, and behavior.

0x01 Affected Scope

Kibana is an open-source, browser-based analytics and search dashboard for Elasticsearch, developed by the Dutch company Elastic. As a core component of the Elastic Stack, Kibana is offered as a product or service and is used together with other Elastic Stack products across a wide variety of systems, products, websites, and enterprises. Because Kibana is widely used in the big-data field, this vulnerability has a large blast radius, as a Shodan search shows:

[Figure: Shodan search results for exposed Kibana instances]
0x02 Vulnerability Scenarios

I used Kibana-6.1.1-linux-x86_64.tar.gz for testing; the setup process is omitted here, as plenty of references are available online.

2.1 Denial of Service

For the denial-of-service demonstration I chose /cli_plugin/index.js. The attack vector is as follows:

/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=../../../cli_plugin/index

After the GET request is sent, the client can no longer open the application page; on the server side the Kibana process exits and the service goes down, as shown below:

[Figure: Kibana process exiting after the malicious request]
2.2 Arbitrary File Read

For the file-read demonstration I chose /etc/passwd. The attack vector is as follows:

/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=../../../../../../../../../../../etc/passwd

After the GET request is sent, the client page throws a 500 error, and the server emits the contents of the passwd file it read, as shown below:

[Figure: contents of /etc/passwd returned in the error response]
2.3 Combining with a Third-Party Application

Kibana is usually deployed alongside other applications. If one of those applications allows uploading or writing JavaScript files, an attacker can create a reverse shell through Node.js:

[Figure: Node.js reverse-shell payload]

The path traversal lets the attacker reach the payload at any file location on the Kibana server:

[Figure: request using the traversal path to load the uploaded payload]

Listening with nc then yields an interactive session:

[Figure: reverse-shell session caught by nc]
0x03 Vulnerability Analysis

The tainted sink is located in \src\core_plugins\console\api_server\server.js:

[Figure: server.js passing the apis parameter to require]

The value of the apis parameter is assigned to the name variable and, as the screenshot shows, is passed into require without any filtering. In Node.js, require is the mechanism for loading modules: it can load core modules such as the built-in "http", a file, or a directory containing a file named "index.js". If the argument begins with "/", "./", or "../", the function treats the module as a file or folder path. Following the call into the asJson function in api.js:

[Figure: the asJson function in api.js]

An exported instance of this class exists in ES_5_0.js in the same directory:

[Figure: the exported API instance in ES_5_0.js]

To summarize, the function's normal flow is to take the name of a JavaScript file that exports an API class instance and call its asJson function. Because filtering and validation are missing, we can specify an arbitrary file, and combined with directory traversal this yields arbitrary file reads on the Kibana server. Building on this analysis, a Node.js application requires many files, and if any of them contains a process.exit call, requiring it will shut down the Kibana process and cause a denial of service. A search turned up three candidate attack vectors.

Vectors that trigger the DoS:

../../../cli_plugin/index.js
../../../cli_plugin/cli.js
../../../docs/cli.js

0x04 Takeaways

LFI usually shows up in PHP applications; this time the same require-style inclusion appears in a Node.js program, and more Node.js programs will likely have this problem in the future. Local file inclusion has been known for many years, yet many software developers and architects still do not account for it. This case is a good illustration of a critical LFI flaw in Kibana that let an attacker run local code on the server, with denial of service as the most immediate impact. For production environments that cannot afford downtime, Node.js LFI deserves serious attention.
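Although the vulnerable code is Node.js, the unsafe pattern and its fix are language-agnostic. Here is an illustrative Python sketch; the base directory and function names are hypothetical, not Kibana's actual code:

```python
import os

# Hypothetical install path; Kibana's real layout differs.
BASE_DIR = "/opt/kibana/src/core_plugins/console/api_server"

def resolve_api(apis: str) -> str:
    # Unsafe: the user-supplied value is joined onto the base path
    # unchecked, so "../" sequences escape the intended directory.
    return os.path.normpath(os.path.join(BASE_DIR, apis))

def resolve_api_safe(apis: str) -> str:
    # Hardened variant: resolve first, then verify the result is
    # still inside BASE_DIR before loading anything.
    candidate = os.path.normpath(os.path.join(BASE_DIR, apis))
    if os.path.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise ValueError("path traversal attempt: " + apis)
    return candidate

payload = "../../../../../../../../../../../etc/passwd"
print(resolve_api(payload))  # escapes to /etc/passwd
```

The extra "../" components beyond the directory depth are simply absorbed at the filesystem root, which is why the advisory's payload stacks so many of them.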

0x05 References

https://github.com/appsecco/vulnerable-apps/tree/master/node-reverse-shell

https://www.elastic.co/downloads/kibana

http://www.cnvd.org.cn/flaw/show/CNVD-2018-23907

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-17246

10 Ways to Prevent Cyberattacks

As the threat landscape keeps evolving, a comprehensive cybersecurity solution requires both perimeter security and active in-network defense. With cyberattacks growing in scope, scale, and frequency, cyber hygiene is becoming ever more important.

Like personal hygiene, cyber hygiene refers to the small practices and habits that help keep a system healthy overall. By building good cyber-hygiene habits, you can reduce your overall exposure and make yourself less vulnerable to many of the most common cybersecurity threats.

This matters because users, whether acting as individuals or on behalf of an organization, ultimately bear some responsibility for keeping their computers and information secure. Below are ten simple everyday steps end users can take to better protect themselves (and, in many cases, their business) from cyberattacks.

1. Start with the basics

Make sure your firewall is active and correctly configured, preferably a next-generation firewall; this is a shared responsibility. Also, segment your IoT devices onto their own network so they cannot infect personal or business devices.

Install antivirus software (there are many well-regarded free options, including Avast, BitDefender, Malwarebytes, Microsoft Windows Defender, and Panda).

Keep your software updated. Updates contain important changes that improve the performance, stability, and security of the applications running on your computer. Installing them ensures your software continues to run safely and effectively.

Don't rely on prevention technology alone. Make sure you have accurate detection tools so you are quickly notified of any attack that bypasses your perimeter defenses. Deception technology is recommended for mid-size and large enterprises. Not sure how to add detection? Look to managed service providers, who can help.

2. Passwords aren't going away; make sure yours are strong

Since passwords are unlikely to disappear any time soon, individuals should take steps to strengthen theirs. Passphrases, for example, have proven easier to remember and harder to crack. Password managers (such as LastPass, KeePass, 1Password, and other services) can also be used to keep track of passwords and keep them secure.

Also consider enabling two-factor authentication where it is available for banking, email, and other online accounts that offer it. There are many options, and most are free or inexpensive.

3. Make sure you are on a secure website

When entering personal information to complete a financial transaction, look for "https://" in the address bar. The "S" in HTTPS stands for "secure" and indicates that communication between the browser and the website is encrypted.

Most browsers display a lock icon or a green address bar when a site is properly secured. If you are on an insecure site, it is best to avoid entering any sensitive information.

Adopt safe browsing practices. Most of today's major web browsers (such as Chrome, Firefox, and Safari) include reasonable security features and useful tools, but there are other ways to make your browsing safer: clear your cache regularly, avoid storing passwords in the browser, don't install suspicious third-party browser extensions, update your browser regularly to patch known vulnerabilities, and limit access to personal information wherever possible.

4. Encrypt sensitive data

Whether it's business records or personal tax returns, encrypting your most sensitive data is a good idea. Encryption ensures that only you, or someone you give the password to, can access your files.

5. Avoid uploading unencrypted personal or confidential data to online file-sharing services

Google Drive, Dropbox, and other file-sharing services are convenient, but they represent another potential attack surface for threat actors. When uploading data to these providers, encrypt it before it goes up.

Cloud providers such as Google Drive and Dropbox offer security measures, but a threat actor may not need to break into your cloud storage to do damage. They may gain access to your files through a weak password, poor access management, an insecure mobile device, or other means.

6. Pay attention to access rights

It is important to know who can access what information. For example, employees who do not work in the company's finance department should not have access to financial information. The same goes for personnel data outside the HR department.

Sharing accounts through a common password is strongly discouraged, and access to systems and services should be limited to the users who need them, especially administrator-level access. Likewise, take care not to lend company computers to anyone outside the company. Without proper access controls, both your information and your company's is easily put at risk.

7. Understand the vulnerabilities of Wi-Fi

Insecure Wi-Fi networks are inherently vulnerable. Make sure your home and office networks are password-protected and encrypted with the best available protocol. Also be sure to change default passwords.

It is best not to conduct any financial business over public or insecure Wi-Fi networks. If you want to be extra careful, avoid connecting laptops at all if they hold any sensitive material.

When using public Wi-Fi, use a VPN client, such as one provided by your company or by a VPN service provider.

Be aware of the risks IoT devices add to your home environment; segmenting them onto their own network is recommended.

8. Understand the vulnerabilities of email

Be careful about sharing personal or financial information by email. That includes credit card numbers (and CVV codes), Social Security numbers, and other confidential or personal information. Think about how Gmail predicts what you are typing: everything you enter can be read.

Watch out for email scams. Common tactics include deliberate misspellings, fabricated email threads, and impersonating company executives. These emails often work until they receive close scrutiny. Unless you can verify the legitimacy of the source, never trust an email asking you to transfer money or take other unusual action.

If you ask a colleague by email to make a purchase, transfer money, or issue a payment, agree on a shared passphrase. Confirmation by phone or text is strongly recommended.

9. Avoid storing your credit card details on websites

Storing credit card information on a website or computer may make each purchase easier, but it is one of the most common ways card data gets compromised.

Get into the habit of reviewing your credit card statements. Storing your card details online is one way your information can be exposed.

10. Keep IT on speed dial

If a breach occurs, you should know your company's (or your own personal) incident response plan. That includes knowing your contacts in IT or finance if you believe your information has been compromised, possibly along with notifying the public relations team. It is also a good idea to know which law enforcement agencies can help if you suspect you are the victim of a crime or scam. Many cyber-insurance policies also require immediate notification.

There is a lot to handle during a breach, and the middle of one is not the time to be learning your incident response plan. Familiarize yourself with the plan and practice it so you can act quickly and confidently when an incident occurs. The same goes for a personal response plan: if compromised, do you know how to immediately cancel your credit or bank cards?

Even the best cybersecurity in the world depends on informed, prepared individuals. Understanding the vulnerabilities present in any network and taking the necessary precautions is an important first step in protecting yourself from cyberattacks, and following these simple rules will improve your cyber hygiene and make you a more prepared, better-protected internet user.

An In-Depth Analysis of CVE-2018-8587 (Microsoft Outlook)

Background

Some time ago, Fortinet FortiGuard Labs researcher Yonghui Han reported a heap overflow vulnerability in Office Outlook to Microsoft, following FortiGuard Labs' disclosure policy. On December 11, Microsoft announced that the flaw had been patched and published an advisory; the vulnerability was assigned CVE-2018-8587.

Microsoft Outlook, one of the components of the Microsoft Office suite, is widely used to send and receive email, manage contacts, record and track schedules, and perform other tasks. Yonghui Han found the heap overflow in multiple versions of Outlook running on Windows, covering all 32/64-bit releases from Outlook 2010 through the latest Outlook 2019 and Office 365 ProPlus. The vulnerability is triggered by a malformed RWZ file (a mail classification rules file): when Outlook ingests such a file, it allocates too little heap space and lacks proper bounds checking, producing a heap overflow.

Reproducing the Vulnerability

Reproduction steps: run Microsoft Outlook, click "Rules => Manage Rules & Alerts => Options => Import Rules", and select the PoC file, which crashes Outlook. Then the analysis can begin.

[Figure: Outlook crashing while importing the PoC rules file]

Here is the call stack at the time of the crash:

[Figure: call stack at the crash]

As the stack shows, the crash occurs when a heap block is freed. Since we cannot yet tell what is wrong with the freed block, we enable the Full Page Heap mechanism to track the faulting block. The command is:

<YOUR_WINDBG_INSTALL_LOCATION>\gflags.exe /p /enable outlook.exe /full

The output below indicates the command succeeded.

[Figure: gflags output confirming Full Page Heap is enabled]

We then reproduce the crash again to capture the new stack:

[Figure: call stack with Full Page Heap enabled]

Now we can see that the non-zero memory address pointed to by ECX is unreadable, and the exception occurs when data is written to that address. The program is most likely trying to write data to an unallocated (or unreleased) memory address, which we can confirm by checking the memory page allocation. The page state shows that the region is still merely reserved, as shown below:

[Figure: memory page state showing a reserved, uncommitted region]

Now we need to figure out why the program writes to an unused memory page. Static analysis shows that the value of ECX comes from EDI, and the program appears to modify EDI after calling MAPIAllocateBuffer, as shown below:

[Figure: disassembly around the MAPIAllocateBuffer call]

Static analysis also tells us that MAPIAllocateBuffer is a wrapper around RtlAllocateHeap which ensures the requested heap size parameter is no larger than 0x7FFFFFF7, i.e. that it is not a negative value. It does not, however, check whether 0 is acceptable as a parameter. Because the actual allocation is 8 bytes larger than the requested size, those 8 bytes are filled with 0x0000000001000010, and MAPIAllocateBuffer returns the heap address just past them. The value of EDI after the call is therefore the allocated heap address received from RtlAllocateHeap plus 8, as shown below:

[Figure: MAPIAllocateBuffer adding an 8-byte header before returning]

From the static analysis above, we can conclude that the write into reserved memory is most likely caused by an integer overflow. Combined with debugging, we find that the heap size passed to MAPIAllocateBuffer is indeed 0. But because MAPIAllocateBuffer then requests a heap of size 0 + 8 = 8, RtlAllocateHeap returns a valid heap address rather than an error. MAPIAllocateBuffer writes 0x0000000001000010 into those 8 bytes and hands the caller an invalid heap-tail address, as shown below:

[Figure: the 8-byte allocation and the returned heap-tail pointer]

Next we need to figure out why the requested heap size became 0. Combining debugging with static analysis, we find that the 0 comes from the current function's argument arg_4 (eax = arg_4 * 4 + 4). However, at the time the current function is called, arg_4 does not hold the value that was passed in, meaning the function modifies arg_4. Debugging shows the modification happens in the subroutine sub_65F7DA, as shown below:

[Figure: sub_65F7DA modifying arg_4]

Analysis of sub_65F7DA shows that it is another wrapper. After a round of debugging we finally find the ReadFile call: the value of arg_4 actually comes from the PoC file, as shown below:

[Figure: ReadFile pulling arg_4 from the PoC file]

Debugging shows that the value read from the file into arg_4 is 0xFFFFFFFF. Through integer overflow, the allocation size passed for the heap becomes 0xFFFFFFFF * 4 + 4 = 0. The program never checks for this, leading to the out-of-bounds write on the following heap block, as shown below:

[Figure: the 0xFFFFFFFF count overflowing the size computation]
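The arithmetic is easy to confirm by simulating 32-bit unsigned math. This is a sketch of the size computation only (the helper names are mine), not Outlook's code:

```python
MASK32 = 0xFFFFFFFF  # all arithmetic below wraps at 32 bits

def requested_size(count: int) -> int:
    """The caller's size computation: eax = arg_4 * 4 + 4."""
    return (count * 4 + 4) & MASK32

def mapi_alloc_size(count: int) -> int:
    """MAPIAllocateBuffer then adds an 8-byte header to the request."""
    return (requested_size(count) + 8) & MASK32

# The PoC's record count of 0xFFFFFFFF wraps the request to zero...
print(hex(requested_size(0xFFFFFFFF)))   # 0x0
# ...so RtlAllocateHeap happily hands back an 8-byte block, while the
# caller believes it has room for 0xFFFFFFFF four-byte entries.
print(mapi_alloc_size(0xFFFFFFFF))       # 8
```

The mismatch between the 8 bytes actually allocated and the roughly 16 GB the caller expects is what produces the out-of-bounds writes observed under Full Page Heap.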

Examining the PoC file confirms that the 0xFFFFFFFF value is indeed present.

[Figure: the 0xFFFFFFFF dword in the PoC file]

Changing it to 0xAABBCCDD, we debug again with the same breakpoints to verify that the overflow is caused by these 4 bytes.

[Figure: the modified count flowing into the same size computation]

At this point we have successfully identified the root cause of the vulnerability.

Finally, comparing the program's assembly code after the patch was released, we can see that validation of the requested allocation size has been added, as shown below:

[Figure: post-patch code validating the requested allocation size]

Remediation

Update to the patched version.


Smart Greybox Fuzzing: A More Capable, More Efficient Fuzzer Model

Preface

A group of researchers has recently designed a smart greybox fuzzing model. They claim that, when hunting for vulnerabilities in code libraries that parse complex files, this model finds bugs more efficiently than existing fuzzers.
Introduction

Fuzzing is a technique for finding software vulnerabilities by sending maliciously crafted input to a target; if the program crashes or fails to behave as expected, a security flaw may be present. There are three main types of fuzzing: black-box fuzzing, where the tester knows nothing about the target; white-box fuzzing, where the tester knows the target inside and out and works primarily against its source code; and greybox fuzzing, where the tester has only partial information about the target.

Five security researchers from the National University of Singapore, Monash University in Australia, and University Politehnica of Bucharest in Romania have been searching for a way to meaningfully improve the efficiency of greybox fuzzing, and they now report notable results.

Building on American Fuzzy Lop (AFL, a fuzzer developed by security expert Michal Zalewski), the researchers developed a tool called AFLsmart that uses a technique they call smart greybox fuzzing (SGF).

According to the researchers, the community maintains a large number of libraries dedicated to parsing complex file structures, such as audio, video, image, document, and database files, and AFLsmart is particularly efficient at analyzing this class of library.

In coverage-based greybox fuzzing, the tester supplies the fuzzer with a seed file and generates new files by randomly flipping, deleting, copying, or adding bits; the target library then parses these files to surface potential vulnerabilities. The problem is that for complex file structures (formats), bit flipping rarely produces valid files.

The researchers overcame this difficulty by defining "novel mutation operators" that work on the virtual file structure rather than at the bit level, which preserves file validity. As the white paper puts it: "We introduce a novel validity-based power schedule that enables SGF to spend more time generating files that are more likely to pass the parsing stage of the program, which can expose vulnerabilities much deeper in the processing logic."
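The difference between bit-level and structure-level mutation can be sketched like this; it is a toy model of the idea, not AFLsmart's implementation:

```python
import random

def bitflip(data: bytes, rng: random.Random) -> bytes:
    """AFL-style mutation: flip one random bit. For strict formats
    the result usually fails the target's parsing stage."""
    out = bytearray(data)
    i = rng.randrange(len(out) * 8)
    out[i // 8] ^= 1 << (i % 8)
    return bytes(out)

def delete_chunk(chunks: list, rng: random.Random) -> list:
    """Structural (smart) mutation: drop a whole chunk of the virtual
    file structure, so the mutant remains a well-formed chunk sequence."""
    out = list(chunks)
    del out[rng.randrange(len(out))]
    return out

rng = random.Random(0)
# A made-up chunked file: (tag, payload) pairs, loosely PNG-like.
seed = [("IHDR", b"\x00\x01"), ("IDAT", b"\xaa\xbb"), ("IEND", b"")]
print(delete_chunk(seed, rng))  # one chunk removed, structure intact
```

A structural mutant can still violate format-specific semantics (a PNG without IHDR, say), but it survives generic framing checks far more often than a bit-flipped blob, which is the effect the validity-based power schedule exploits.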

In their experiments, the researchers tested eleven popular open-source libraries for handling binary executables (ELF), images, audio, and video, including Binutils, LibPNG, ImageMagick, LibJPEG-turbo, LibJasper, FFmpeg, LibAV, WavPack, and OpenJPEG.

They ran AFLsmart against these libraries and compared the results with fuzzers such as AFL, AFLfast, and Peach. AFLsmart found 33 vulnerabilities, double the number found by AFL and AFLfast, while Peach found none.

[Figure: vulnerabilities found by AFLsmart versus AFL, AFLfast, and Peach]

Overall, the new fuzzer discovered a total of 42 security vulnerabilities, 17 of which have been assigned CVE numbers. The bug classes found include assertion failures, stack buffer overflows, null pointer dereferences, and division-by-zero errors.

Project Repository

The researchers have open-sourced the AFLsmart fuzzer; interested readers can fork it.

AFLsmart fuzzer: [GitHub]

* Source: securityweek, compiled by FB editor Alpha_h4ck. Please credit CodeSec.Net when reprinting.

MD5 should not be used in forensics (or anywhere else)


A few days ago, I drafted (but had not yet published) a post about using MD5 for validating or authenticating evidence in digital forensics. MD5 has had security problems for twenty years, but it's still been used in forensics, although the trend has been toward SHA-1 (which has some problems of its own) and SHA-2.

After drafting the post, I discovered that the Scientific Working Group on Digital Evidence has released a draft endorsing the use of MD5 and SHA-1. I wrote in to share my concerns, but I also reached out to some cryptographers via Twitter. Dr. Marc Stevens, a cryptographer known for his expertise in attacking MD5 and other hash functions, released a series of tweets that was even more critical of MD5 than I anticipated and that was incredibly damning for any forensic expert who continues to rely on MD5.

First, I'll share my original thoughts in abbreviated form. Then I'll share some highlights from Dr. Stevens' tweets. If you're interested in Dr. Stevens' views, consider reading all of what he had to say on Twitter and in his scientific work. If I have misrepresented or misunderstood his views in any way, I apologize.

When we image and process digital evidence, we use a hash function to fingerprint that data so that we can compare it to other known files and so that, later on, we can verify that the evidence hasn't changed. SHA-1 is probably the most common hash function used in forensics and there is some support for SHA-256, which is what we should be moving toward.

In order to be considered secure, a hash function should be strong against two attacks: collisions and preimages. A collision occurs when we find two "messages" (files, strings, whatever) that have the same hash value. To be secure, it should be hard to find two files that have the same hash. Note that in this scenario we are allowed to pick both messages. If we can find any two that match, we have a collision. A preimage is a little different because one of the messages has already been picked. To find a preimage, we have to find a second message that has the same hash value. The distinction is like the difference between trying to find two people in a room with the same birthday (anybody can match anybody) versus trying to find somebody in a room with your birthday.

MD5 is considered a weak hash function because there are practical attacks for finding collisions. There aren't any practical attacks for finding preimages for MD5.

If we need to verify that a file hasn't changed, MD5 is plenty good enough to detect accidental modification. If the file was corrupted or inadvertently modified by a careless examiner, there is an infinitesimally small chance that the hash will come out the same. If we're worried that someone has intentionally altered the data, they would have to be able to execute an attack (find a preimage) that is beyond what anyone is currently able to do using publicly-known attacks. Hell, even if the file wasn't hashed, a court would probably not allow someone to assert that the evidence had been altered without some evidence suggesting it had.
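To illustrate why accidental modification is reliably caught, here is a short Python check; the "evidence" bytes are made up, and a real workflow would hash the image file itself:

```python
import hashlib

def fingerprint(data: bytes) -> dict:
    """Hash the same evidence with both algorithms for comparison."""
    return {name: hashlib.new(name, data).hexdigest()
            for name in ("md5", "sha256")}

original = bytearray(b"dd image of the suspect drive")
corrupted = bytearray(original)
corrupted[0] ^= 0x01  # a single flipped bit, e.g. silent corruption

before = fingerprint(bytes(original))
after = fingerprint(bytes(corrupted))

# Either hash flags the change. The forensic argument is about
# *deliberate* tampering, where MD5's collision weakness matters.
assert before["md5"] != after["md5"]
assert before["sha256"] != after["sha256"]
```

The one-bit flip changes roughly half the output bits of either digest (the avalanche effect), which is why accidental corruption is essentially always detected even by a broken-for-security hash like MD5.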

So, we can use MD5, right?

I think you do so at your own peril.

The problem is that cryptographers, the people who are experts in making hashes and ciphers, have been saying not to use MD5 for 20 years and the attacks against MD5 have gotten much, much better since then. When a forensic examiner goes into court, he or she serves the court as an "expert". I feel like I could offer a reasonable defense/explanation for using MD5. I've read books on cryptography and took a grad-level class in it. I'm knowledgeable (enough to be dangerous). I think I understand it well enough to say that despite the warnings it's okay to use it in certain circumstances. But I'm not an expert in cryptography so why would I try to weigh in as one? [Note: Dr. Stevens' tweets indicate that he disagrees with my contention that MD5 would be acceptable in some circumstances. But, that's my point. Any situation where I think it might be okay to use MD5 is based on my amateur understanding of cryptography, not the expert-level understanding that he or his colleagues would have.]

There's an added complication. Even if MD5 is okay to use in these scenarios, trying to justify it without a good understanding of why could lead you into some murky waters. Simply not being careful about how you answer questions could get you trapped by a well-prepared attorney.

Imagine this: You go into court and explain how you verified the images in your case using MD5. The defense attorney asks you some very innocent questions about it: "What's MD5?", "can two files have the same hash?".

You give the best explanation that you remember from your training: "the odds of two files having the same hash are like 1 in 80 bajillion."

"So", he says "I couldn't just change the file and tweak it so the hash would be the same?"

"No way", you say. "It's like winning the lottery five times in a row."

The defense attorney smiles back at you and grabs a stack of papers off of his table. He has an article about how some researchers forged digital certificates that used MD5. He'd like you to read the highlighted portion. He has another about how the Flame malware hijacked Windows Update because of MD5. Would you please read the paragraph he highlighted there as well? He picks up a USB drive and tells you he has pictures of Jack Black, James Brown, and Barry White and they all have the same hash. He has a picture of a ship and a plane and those two have the same hash. He'd like you to hash these files to demonstrate.

"So", he says again. "What you told us a few minutes ago about the hashes. It wasn't true, was it?"

I disagree: cryptography is notoriously hard to get right. You should rely on expert cryptographic advice. And the prevailing expert opinion is: do not use MD5 for security.

― Marc Stevens (@realhashbreaker) December 16, 2018

And nowhere MD5 actually helps you in court, and can only hurt, since any cryptographic expert would say it should not be used for that. While SHA2 would help you in court. So what would be the best advice?

― Marc Stevens (@realhashbreaker) December 16, 2018

I think these tweets are key because they argue (from his expert perspective) that we should not use MD5 but also point out that this is the prevailing opinion among cryptographers. This is really key because the methods that we use in a legal case are supposed to meet a standard, namely the Daubert standard which considers five factors:

1. Whether a theory or technique can be and has been tested

2. Whether the theory or technique has been subject to both peer review and publication

3. The known or potential error rate of the method

4. The existence and maintenance of standards controlling its operations; and

5. Whether it has attracted widespread acceptance within a relevant scientific community

MSP Perspective: JumpCloud or Jamf?


MSP Perspective: JumpCloud or Jamf?
As end users start to leverage a wide range of platforms (Mac, Windows, and Linux), MSPs are looking for the best ways to manage those platforms and the users on them. User and system management for Apple platforms in particular is a critical area for MSPs to explore, and while Jamf focuses solely on Apple platforms, JumpCloud provides Windows and Linux support as well. In order to evolve their product stack and best practices to meet the changing IT landscape, from an MSP perspective, JumpCloud or Jamf: which is the better solution?

Redefining Device Management
Depending upon the needs of the MSP and their clients, Jamf may not necessarily be in direct competition with JumpCloud. Instead of deciding between JumpCloud or Jamf, the optimal solution may actually be JumpCloud and Jamf working together in your product stack. Furthermore, in order for MSPs to decide which tools are most helpful, it is important to describe what each does functionally.

Jamf has been around for a long time and was recently acquired by a private equity firm. It is an on-prem device management tool focused on Apple iOS and macOS. MSPs and IT admins have leveraged Jamf for a number of years to handle day-to-day management tasks for their fleets of Apple devices.

JumpCloud, however, is known as a cloud directory service . JumpCloud’s Directory-as-a-Service platform securely manages and connects users to their IT resources, including systems, applications, files, and networks―regardless of platform, protocol, provider, and location. Think of JumpCloud’s IDaaS platform as the reimagination of Active Directory for the cloud and cross-platform environments.

JumpCloud or Jamf: What's Best for Your Customers?

For MSPs, the choice of user and system management tools often boils down to customer needs. What systems and devices is the MSP chartered with managing? What IT resources do users need to access and are they in the cloud or on-prem? How does security get embedded into the process of delivering services to clients? Is system security critical? What about network security or identity security?

All of these critical needs are important to understand when (Read more...)

Airspace Launches Galaxy Drone Security Solution


Former McAfee, FireEye CEO David DeWalt Joins Airspace Board of Directors; Former FAA Administrator Michael Huerta Joins as Board Advisor

SAN FRANCISCO (BUSINESS WIRE) Airspace Systems today introduced Airspace Galaxy™, the first family of fully-automated, always-on airspace security solutions that accelerate the integration of drones into cities and protect people and property ― on the ground and in the air ― from clueless, careless, or criminal drone operators.



The new Airspace Galaxy security platform combines input from multiple sensors to detect drone activity at long-ranges, instantly identifies authorized and unauthorized flights, assesses risk, and if necessary and permitted, deploys an autonomous mitigation system to safely capture and remove an unauthorized or malicious drone.

“We created Airspace to accelerate the integration of lifesaving drone technologies while giving communities the ability to ensure safe and secure skies,” said Jaz Banga, Airspace co-founder and CEO. “Galaxy is the first crucial step toward creating the trusted environment required to unlock the full potential of drones.”

The airspace security company also today announced that cybersecurity veteran David DeWalt has invested in Airspace through the NightDragon Fund, and joined the Airspace board of directors as Vice Chairman. Additionally, Airspace announced that former Federal Aviation Administration Administrator Michael Huerta has joined the company’s board of advisors.

Airspace developed the Galaxy security platform for business, public venues, government, law enforcement, and the military to protect people, property, and IP from harm. Galaxy was recently deployed to detect and identify drone activity behind the scenes for Major League Baseball during the 2018 World Series games in Boston and Los Angeles, for the San Francisco Police Department in support of the U.S. Navy to protect its annual San Francisco Fleet Week, and in Sacramento for the 36th annual California International Marathon.

And in the fall, during the Chairman of the Joint Chiefs of Staff’s BLACK DART live-fire exercise, Galaxy was the only airspace security solution to deliver a fully autonomous drone mitigation capability from takeoff to landing capturing both stationary and moving targets.

“Airspace security is a prerequisite to realize the full potential of the drone economy,” said Huerta. “We are on the verge of many great things that drones can do for us, but without the kind of safety and security Airspace Galaxy offers, we are just one terrible event away from stalling what could be a thriving, multi-billion dollar industry.”

We believe in the good that drones can do

Drones have already proven critical in disaster response. Firefighters have used them to monitor ongoing fires to focus their efforts, keep themselves safe, and help them save lives. Emergency teams have used drones to survey damage after natural disasters, deliver supplies, and find missing people.

But as drones get smaller and cheaper, the potential physical and cyber threats grow exponentially. And regardless of whether a damaging drone event is caused by the naïve or the nefarious, the results will be the same: progress derailed, and benefits denied.

Airspace developed the Galaxy software platform to protect people, property, and IP by stopping drone threats before they happen.

Galaxy: Mobile, Modular, Simple to Operate

The critical first step in airspace security is accurate long-range detection of drone activity. As a modular system, Galaxy options include the ability to configure detection based on a customer’s site- and mission-specific requirements and includes identification of all types of drones, both signal and non-signal emitting.

The Airspace sensors detect anomalies operating from ground level to 400-feet and beyond in the sky, and cover up to a 25-mile radius. Detection comprises three primary functions: radio frequency (RF) sensors that use drone-to-operator communication links to legally identify a drone’s unique identifier and launch location, a camera array to minimize false alarms and improve localization, and communication alerts to the Galaxy operator.

Galaxy then fuses data from multiple sensors into a single, easy-to-use graphical user interface that is coupled with artificial intelligence (AI) and machine learning to create actionable intelligence for the system to handle automatically or with human override. Users can log in from a browser on their desktop or mobile device to see all pertinent information.

Finally, if necessary and permitted by law, the Airspace mitigation option dispatches the Airspace Interceptor drone with a single click. Using advanced guidance systems and powered by AI, the Interceptor autonomously locks onto identified rogue drones and heads them off at high speed without human guidance. Trusted and deployed by the U.S. Department of Defense, the Airspace Interceptor fires a Kevlar net to neutralize and capture unauthorized or malicious drones, and then delivers them to a safe place, preventing damage to either people or property.

“Thinking about security in two dimensions is antiquated; it’s just not good enough to keep the bad guys out today,” said DeWalt, who has led two of the biggest companies in cybersecurity, McAfee and FireEye, and is now Delta Air Lines chairman of Safety & Security. “Today you have to protect in three dimensions and basically create an airspace security dome over everything: events, your company, your entire city.”

Among many other positions, DeWalt is the founder of cybersecurity platform NightDragon Security and the managing director of early-stage investor AllegisCyber. He sits on the boards of several cybersecurity firms, including Optiv, Callsign, and Claroty, and he has served on the Department of Homeland Security’s National Security Telecommunications Advisory Committee since 2011. DeWalt was president and CEO of McAfee between 2005 and 2012 and was CEO of FireEye between 2012 and 2016.

“David’s and Michael’s experience across the cybersecurity and aviation industries is incredibly relevant to our mission to create autonomous airspace security and our vision of a world of safe and secure skies open for business and social good,” said Banga. “They are both equally strategic assets for Airspace.”

Airspace began producing Galaxy solutions, now ready to deploy in three configurations, after raising a $20-million Series A round led by Singtel Innov8 Ventures in March 2017. The company was founded in 2015 by a team from Apple, Google, and Cisco Systems, and is backed by SterlingVC, the venture capital arm of the New York Mets, as well as Shasta Ventures, Granite Hill Capital Partners, Singtel Innov8, and S28 Capital.

About Airspace Systems Inc.

Airspace uses AI and advanced robotics to create fully automated, always-on solutions that deliver the three mandatory requirements of airspace security: long-range detection, instant identification, and safe capture and removal of unauthorized or malicious drones. Airspace solutions protect people, property, and IP for businesses, law enforcement, and the military. All Airspace solutions are mobile, modular, and simple to operate. Founded in San Francisco in 2015, Airspace is funded by early investors in Nest, Palantir, and Skype. For more information go to http://airspace.co/

Contacts

Lisa Tarter

Thin Protocols, Lack of Network Effects and A Theory of Value for Security Token ...

Thin Protocols, Lack of Network Effects and A Theory of Value for Security Tokens

Jesus Rodriguez



Understanding how value is created and accumulated in a technology market is the most effective, and arguably the hardest, way to develop a unique thesis about the space. In the case of security tokens, formulating a value creation thesis seems particularly difficult given the early fragmentation of the space. After a year of frantic development, hundreds of new players, many more press releases, and the first group of issued tokens, we still don't have a clear picture of the main value creation avenues in the security token space. Today, I would like to explore a few basic ideas that might provide some clarity into how value will be created and accrued in the crypto-securities market.

The challenge of formulating a theory of value for security tokens is particularly difficult when the space still hasn’t seen major investments. Venture capitalists and institutional investors tend to deploy capital in areas in which they believe value will be created and accumulated and, consequently, are an early data point for short-term and long-term value creation thesis. In other words, “following the money” is a simpler way of formulating ideas about value creation across the lifetime of a technology market. In the case of security tokens, well….there is no money to follow yet.

When thinking about value creation in security tokens, there are three areas that concern me greatly:



1) Product vs. Network Friction: Unlike other crypto markets, the crypto-securities space seems to have been evolving as a collection of isolated products without an underlying network.

2) Lack of Long-Term Network Effects: Related to the previous point, the security token market doesn’t seem to be creating strong network effects as it evolves.

3) Thin Protocols Effect: The first wave of crypto-securities seem to be dominated by very basic protocols and tokens instead of a strong platform foundation.

Product vs. Network Friction

Until today, the security token market has been evolving as an isolated collection of products without an underlying network or incentives to influence the growth of the entire ecosystem. A crypto-network is not only a great value-creation mechanism but also a channel to distribute value across the different participants in the network. With a network, some products can benefit from the value created by other participants; without one, value accumulates in specific product categories but does not spread easily across the rest of the ecosystem.

Lack of Network Effects

Related to the previous point, the existing generation of security token products is creating minimal long-term network effects. Most of the network effects in security token transfers happen at the Ethereum level, which has very few ramifications for the rest of the security token ecosystem.

Thin Protocols

The fat protocol thesis was one of the main value creation theories for crypto-assets. If you read this blog you know I am not a big fan of the fat protocol ideas, but I recognize that the thesis captures the value creation dynamics for a relevant part of the crypto ecosystem. In the case of security tokens, the entire space is based on applications and very thin protocols like DS-Protocol or R-Token. Why is this relevant? Well, while this type of thin protocol can capture value during specific phases of the security token lifecycle, such as issuance or compliance, it is unlikely to capture or distribute long-term value.

A Theory of Value for Security Tokens

Recognizing the DNA and challenges of the current security token ecosystem, we can start formulating a basic theory of how value is going to be created in the space. If we visualize a timeline of the evolution of the security token space from the value creation perspective, we might get something like the following:


[Figure: timeline of value creation in the security token space]

Some notes that might help to understand the previous diagram:

The initial value in the security token space has been captured by the issuance platforms.

Slowly security token exchanges might start to accrue some value from the listing and trading activities of crypto-securities.

B2B scenarios such as the tokenization of corporate bonds will be one of the areas that capture a lot of value in the initial wave of security tokens.

Liquidity pools such as crowdfunding marketplaces might capture some relevant value in the security token ecosystem.

Crypto-financial protocols in areas such as debt, derivatives, disclosures or compliance have the network effect required to capture and distribute value for security tokens.

Sophisticated security token products such as derivatives that serve large institutional investors will be one of the ultimate value creation engines in the security token space.

If the idea of a blockchain specialized in security token materializes, we might see a shift on the value creation dynamics of the security token market.

These are some of my initial ideas about the value creation challenges and dynamics in the security token space. I expect some of the ideas outlined might prove controversial or even incomplete, but hopefully they will help trigger a debate about this important subject.
