
Threat Hunting When the Perimeter is Vague


Written by: Amiram Cohen

Are Domains Malicious?

The most basic capability of malware is the ability to communicate. Most malware will use the DNS protocol to enable robust communication. Typical malware payloads will use such techniques to download files to the compromised machine, or to communicate with the Command and Control (CnC) servers in order to control activities or exfiltrate data.

These days, the defensive perimeter is becoming a vague concept. This reality is the result of more personal devices moving in and out of the network. Moreover, networks have to contend with IoT devices that lack embedded protection and are often invisible to corporate monitoring and defensive planning. Situations like these are why security teams need to examine network traffic and block malicious activity.

The biggest challenge organizations face when looking at network traffic and analyzing suspicious domains is determining which of them are malicious and which are benign. In most cases, a domain name on its own lacks context as a standalone indicator of malicious activity. More information is typically needed in order to add context and provide a better understanding of the domain in question.

In this post, we'll help you get better context on the potential for malicious activity when looking at suspicious domains. There are a variety of security intelligence data sources and services available to the public, both free and paid, that can greatly increase the accuracy of decision making.

Ready, Get Set, Let's Go...

One of the first things an enterprise security specialist needs to do when analyzing traffic is determine if a suspicious domain was accessed from within the enterprise to a remote resource. In this scenario, we look for possible indicators and resources that might help with context to the inspected domain.

When examining the domain we should take several things into consideration:

Was the domain classified as being malicious in the past?

What can we learn from the domain's registrant information?

What can we learn from the history of that domain?

Are there indicators based on WHOIS records and where the domain is hosted?

Can we see any relationship, similarity, or pivots between the inspected domain and other malicious domains?

Can we learn something from the traffic and popularity of the inspected domain?

Third Party Indicators

Our first step when looking into a suspicious domain is to understand if there is already evidence in the wild tying this domain to malicious activity. There are many publicly available tools offering information about domains and the indicators flagging them as either malicious or benign.

Before we dive into using tools, remember that many of the results that come from third-party resources should be taken with a grain of salt. Many of these tools are automatic, black-and-white mechanisms, and are not 100% accurate. However, several red flags together can be a strong indicator of something malicious.

One of the most well-known public services is VirusTotal. This service lets you easily determine whether a given domain has been linked to malware activity by a variety of antivirus vendors. In some cases, VirusTotal can also show the relationship between the suspicious domain and malicious files hosted on it.



Figure 1: VirusTotal community analysis portal for suspicious file and URL detection
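If you want to script this kind of lookup rather than use the web portal, here is a minimal Python sketch against VirusTotal's public v2 domain-report API. It assumes you have registered for an API key; the endpoint and response fields follow the v2 documentation and may change in later API versions.

# Minimal sketch: look up a domain report via VirusTotal's v2 API.
# Assumes a registered API key; v2 endpoint/fields may change over time.
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder, not a real key

def domain_report(domain):
    resp = requests.get(
        "https://www.virustotal.com/vtapi/v2/domain/report",
        params={"apikey": API_KEY, "domain": domain},
        timeout=30,
    )
    resp.raise_for_status()
    report = resp.json()
    # "detected_urls" lists URLs on this domain flagged by AV engines.
    for item in report.get("detected_urls", [])[:5]:
        print(item.get("url"), "positives:", item.get("positives"))

domain_report("example.com")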

There are many other reliable services that give users the ability to automatically analyze a domain and get indications of its maliciousness. Another simple, yet effective, approach is to query your favorite search engine for indications that tie the suspected domain to other malicious activities. In this case, a simple search for the domain with keywords like "malicious" or "phishing" will do the job. Be careful not to accidentally browse to the suspected domain and expose your computer to unnecessary threats.

Here is a short list of services that may help with determining if a domain is malicious:

PhishCheck or CheckPhish - Online, on-demand phishing check engines.

Malwares.com / Hybrid-analysis.com / Totalhash - Malware analysis systems.

WOT - A ranking service that supports public reviews of domains.

Domain Information and History

Sometimes, there isn't a third party indicator available on the domain in question. In these situations, we can look for other publicly available information related to the domain, such as registration details, or WHOIS records.

A WHOIS service can add context to the inspected domain. The most interesting fields are the date fields and the registrant fields. The date fields generally indicate the age of a domain; a newly registered (or recently changed) domain should be inspected more carefully, as it may represent an emerging threat. A malicious domain may be registered with fake information, and analyzing that information may help determine the true identity behind the domain.
Domain owners may also use a "domain privacy" service, which hides the registrant's contact details from public WHOIS records, and the use of privacy protection should be weighed in the overall context of other findings on the suspicious domain.

WHOIS services can be queried in several ways:

Linux shell - type "whois example.com" in the terminal (or see the docs here).

Windows command line - a simple Windows WHOIS query binary is available for download here.

Online WHOIS services - a nice way to keep our distance from the suspected domain. There are many services out there; who.is is the simplest one.
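For a sense of how simple the protocol behind all three options is, here is a minimal Python sketch that speaks WHOIS directly over TCP port 43. The whois.verisign-grs.com server handles .com domains; error handling and referral-following are omitted for brevity.

# Minimal sketch: query a WHOIS server directly over TCP port 43.
import socket

def whois_query(domain, server="whois.verisign-grs.com"):
    # WHOIS is plain text: send the query terminated by CRLF,
    # then read until the server closes the connection.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois_query("example.com"))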

Many common scam, phishing, and malware distribution domains can be recognized from the URL alone. We strongly advise against browsing directly to a suspicious domain. However, some online services can take a screenshot for us and do the job safely.

Automated Dashboard with various correlation visualizations in R


(This article was first published on R Programming - DataScience+, and kindly contributed to R-bloggers)

Categories

Programming

Tags

Correlation Data Visualisation R Programming

In this article, you will learn how to build an automated dashboard with various correlation visualizations in R. First, you need to install the `rmarkdown` package into your R library. Assuming that you have installed `rmarkdown`, next create a new R Markdown script in R.

After this, you type the following code in order to create a dashboard with rmarkdown and flexdashboard:

---
title: "Dashboard visualizations in R: Scatter plots"
author: "Kristian Larsen"
output:
  flexdashboard::flex_dashboard:
    orientation: rows
    vertical_layout: scroll
---

```{r setup, include=FALSE}
library(flexdashboard)
# install.packages("ggplot2")
# load package and data
options(scipen=999) # turn-off scientific notation like 1e+48
library(ggplot2)
theme_set(theme_bw()) # pre-set the bw theme.
data("midwest", package = "ggplot2")
midwest <- read.csv("http://goo.gl/G1K41K") # bkup data source
options(scipen = 999)
library(ggplot2)
library(ggalt)
library(plotly)
midwest_select <- midwest[midwest$poptotal > 350000 &
                          midwest$poptotal <= 500000 &
                          midwest$area > 0.01 &
                          midwest$area < 0.1, ]
# load package and data
library(ggplot2)
data(mpg, package="ggplot2") # alternate source: "http://goo.gl/uEeRGu"
theme_set(theme_bw()) # pre-set the bw theme.
g <- ggplot(mpg, aes(cty, hwy))
# load package and data
library(ggplot2)
data(mpg, package="ggplot2")
mpg <- read.csv("http://goo.gl/uEeRGu")
# load package and data
library(ggplot2)
data(mpg, package="ggplot2")
# mpg <- read.csv("http://goo.gl/uEeRGu")
```

Row
-----------------------------------------------------------------------

### Chart A: Scatterplot

```{r}
gg <- ggplot(midwest, aes(x=area, y=poptotal)) +
  geom_point(aes(col=state, size=popdensity)) +
  geom_smooth(method="loess", se=F) +
  xlim(c(0, 0.1)) +
  ylim(c(0, 500000)) +
  labs(subtitle="Area Vs Population", y="Population", x="Area",
       title="Scatterplot", caption = "Source: midwest")
plot(gg)
ggplotly(p = ggplot2::last_plot())
```

### Chart B: Scatterplot + Encircle

```{r}
ggplot(midwest, aes(x=area, y=poptotal)) +
  geom_point(aes(col=state, size=popdensity)) + # draw points
  geom_smooth(method="loess", se=F) +
  xlim(c(0, 0.1)) +
  ylim(c(0, 500000)) + # draw smoothing line
  geom_encircle(aes(x=area, y=poptotal),
                data=midwest_select,
                color="red", size=2, expand=0.08) + # encircle
  labs(subtitle="Area Vs Population", y="Population", x="Area",
       title="Scatterplot + Encircle", caption="Source: midwest")
```

Row
-----------------------------------------------------------------------

### Chart C: Jitter Plot

```{r}
g + geom_point() +
  geom_smooth(method="lm", se=F) +
  labs(subtitle="mpg: city vs highway mileage", y="hwy", x="cty",
       title="Scatterplot with overlapping points", caption="Source: midwest")
ggplotly(p = ggplot2::last_plot())
```

### Chart D: Jitter Points

```{r}
# Scatterplot
theme_set(theme_bw()) # pre-set the bw theme.
g <- ggplot(mpg, aes(cty, hwy))
g + geom_jitter(width = .5, size=1) +
  labs(subtitle="mpg: city vs highway mileage", y="hwy", x="cty",
       title="Jittered Points")
ggplotly(p = ggplot2::last_plot())
```

Row
-----------------------------------------------------------------------

### Chart E: Counts Chart

```{r}
# Scatterplot
theme_set(theme_bw()) # pre-set the bw theme.
g <- ggplot(mpg, aes(cty, hwy))
g + geom_count(col="tomato3", show.legend=F) +
  labs(subtitle="mpg: city vs highway mileage", y="hwy", x="cty",
       title="Counts Plot")
ggplotly(p = ggplot2::last_plot())
```

### Chart F: Bubble plot

```{r}
# load package and data
library(ggplot2)
library(gganimate)
data(mpg, package="ggplot2")
# mpg <- read.csv("http://goo.gl/uEeRGu")
mpg_select <- mpg[mpg$manufacturer %in% c("audi", "ford", "honda", "hyundai"), ]
# Scatterplot
theme_set(theme_bw()) # pre-set the bw theme.
g <- ggplot(mpg_select, aes(displ, cty)) +
  labs(subtitle="mpg: Displacement vs City Mileage", title="Bubble chart")
g + geom_jitter(aes(col=manufacturer, size=hwy)) +
  geom_smooth(aes(col=manufacturer), method="lm", se=F)
ggplotly(p = ggplot2::last_plot())
```

Row
-----------------------------------------------------------------------

### Chart G: Marginal Histogram / Boxplot

```{r}
# load package and data
library(ggplot2)
library(ggExtra)
data(mpg, package="ggplot2")
# mpg <- read.csv("http://goo.gl/uEeRGu")
# Scatterplot
theme_set(theme_bw()) # pre-set the bw theme.
mpg_select <- mpg[mpg$hwy >= 35 & mpg$cty > 27, ]
g <- ggplot(mpg, aes(cty, hwy)) +
  geom_count() +
  geom_smooth(method="lm", se=F)
ggMarginal(g, type = "histogram", fill="transparent")
ggMarginal(g, type = "boxplot", fill="transparent")
# ggMarginal(g, type = "density", fill="transparent")
```

### Chart H: Correlogram

```{r}
# devtools::install_github("kassambara/ggcorrplot")
library(ggplot2)
library(ggcorrplot)
# Correlation matrix
data(mtcars)
corr <- round(cor(mtcars), 1)
# Plot
ggcorrplot(corr, hc.order = TRUE,
           type = "lower",
           lab = TRUE,
           lab_size = 3,
           method="circle",
           colors = c("tomato2", "white", "springgreen3"),
           title="Correlogram of mtcars",
           ggtheme=theme_bw)
```

Screenshot:



The results of the above code are published on RPubs here.

References: Using flexdashboard in R

Related Post

Automated Dashboard with Visualization and Regression for Healthcare Data

Create easy automated dashboards with R and Markdown

Send Desktop Notifications from R in Windows, Linux and Mac

CHAID vs. ranger vs. xgboost ― a comparison

Common Mistakes to Avoid When Learning to Code in Python

First major Kubernetes flaw enables hackers to access backend servers undetected


A Google team first designed the Kubernetes tool, which now is managed by the nonprofit Cloud Native Computing Foundation. (Wikimedia Commons)


Written by Jeff Stone

Dec 5, 2018 | CYBERSCOOP

Researchers have uncovered the first known security flaw in Kubernetes, a popular open-source tool for managing application workloads.

Developers published three security updates this week that promised to protect users of Kubernetes, a container orchestration system, from a new vulnerability that could make it possible for hackers to inject malicious code or bring down an app from behind an organization's firewall. Kubernetes runs on top of operating systems, taking commands from an administrator or developer and passing those instructions to nodes throughout an environment.

This bug, the first major issue found in Kubernetes, warranted a 9.8 out of 10 severity score because it could allow outsiders to establish a connection through Kubernetes' trusted application programming interface to backend servers, ZDNet reported.

From there, hackers can use that authentication to send arbitrary or malicious requests disguised under valid Kubernetes credentials, using that access to gain full administrator privileges. Exploiting the flaw is of low difficulty and does not require direct user interaction.

“There is no simple way to detect whether this vulnerability has been used,” reads the GitHub post where the vulnerability was first announced last week. “Because the unauthorized requests are made over an established connection, they do not appear in the Kubernetes API server audit logs or server log. … In default configurations, all users (authenticated and unauthenticated) are allowed to perform discovery API calls that allow this escalation.”

Kubernetes can be used as a container platform, a microservices platform, or a portable cloud platform that also facilitates automation, according to its website. It can also be used to orchestrate computing, networking and storage infrastructure on behalf of developers' workloads.

Google senior staff engineer Jordan Liggitt said in an update to the GitHub post Monday that Kubernetes versions v1.10.11, v1.11.5 and v1.12.3 are now available to fix the vulnerability, known as CVE-2018-1002105.

Kubernetes first was designed by Google and now is maintained by the nonprofit Cloud Native Computing Foundation.

In this Story: cloud, Containerization, Kubernetes, microservices, open source, servers, vulnerabilities

How Do Automotive Systems Become Safer? QNX Says It Comes Down to These Seven Points


We have to admit a fact: as cars get smarter, the risks that come with them keep growing.

This is not scaremongering. A few years ago, Fiat Chrysler carried out a large-scale recall after its head unit was hacked and remotely controlled; not to mention Tesla, cracked repeatedly thanks to its celebrity status, or the newly popular NIO ES8, whose unstable system frequently freezes. In short, once automotive software goes wrong, it poses a serious threat to driving safety.

This matters all the more as autonomous driving and connectivity draw closer and vehicle architectures grow more complex. Today's automotive electronic architecture may consist of 60-100 ECUs and 6-8 independent systems. As intelligence accelerates, future vehicles will be built around 6-10 high-performance computing platforms (HPCs), with consolidated software systems and OTA update capability.

Data gives an intuitive sense of the possible risk. A report from Carnegie Mellon University's Software Engineering Institute found that code developed in the United States averages 0.75 defects per function point, or roughly 6,000 defects per million lines of code.

Code rated "very good" contains 600-1,000 defects per million lines; "excellent" code, fewer than 600. (Roughly 1-5% of defects become vulnerabilities.)

In other words, even if all the code reached the "very good" level, at today's average of 100 million lines of code per vehicle, every car could carry 100,000 defects and 1,000-5,000 vulnerabilities.

What risks could these defects and vulnerabilities create? No one can predict.

The "security whitepaper"

Of course, this article is not meant to be alarmist. Where there are risks, there are countermeasures.

In automotive software and security, BlackBerry QNX is a participant you cannot get around. A couple of days ago, GeekCar had the chance to talk with Kaivan Karimi, Senior Vice President of Sales and Marketing at BlackBerry Technology Solutions (BTS), about their view of automotive security as vehicles get smarter.



Industry insiders will already know BlackBerry QNX. BlackBerry announced its acquisition of QNX in 2010. BlackBerry itself has more than 30 years of experience in security; its once-hot phone business used security and business features as its biggest selling points. QNX, the largest operating system supplier in the automotive field, has been powering safety-certified software for more than 35 years.

Kaivan Karimi told GeekCar that, applying years of industry experience to the risks facing the automotive sector, QNX has distilled a set of "guidelines." Whether OEM or supplier, anyone who follows this recommended framework for protecting vehicles against cybersecurity threats can prevent the vast majority of risks, and can quickly fix even vulnerabilities that might affect driving.



The paper, "Automotive Cybersecurity: BlackBerry's 7-Pillar Recommendation," summarizes the following seven points:

Secure the supply chain:

Establish a root of trust by ensuring that every chip and electronic control unit (ECU) in the vehicle can be properly authenticated and loads trusted software, independent of supplier or manufacturer. Scan all deployed software for compliance with standards and the required security posture. Evaluate the supply chain regularly, from a vulnerability and penetration-testing perspective, to ensure it is certified and approved for delivery.

Use trusted components:

Create a security architecture that is deeply layered in a defense-in-depth architecture, using secure hardware, software, and applications.

Employ isolation and trusted messaging:

Use an electronic system architecture that isolates safety-critical from non-safety-critical ECUs and can keep operating safely when an anomaly is detected. This approach also ensures that communication between the vehicle's electronics and the outside world is safe and reliable. More importantly, communication between ECUs needs to be trusted and secure.

Conduct in-field health checks:

Ensure that all ECUs integrate analytics and diagnostics software that can log events and send the results to the cloud for further analysis and preventive action. Automakers should also confirm that a defined set of metrics is scanned automatically and regularly, so that problems can be fixed via secure over-the-air (OTA) software updates while the vehicle is in the field.

Build a rapid incident response network:

Share common vulnerabilities and exposures across a network of participating enterprises, so that expert teams can learn from one another and deliver advice and fixes in less time.

Use a lifecycle management system:

Push secure OTA software updates automatically as soon as an issue is found. Use proactive certificate management to manage security credentials, and deploy unified endpoint policy management to govern applications downloaded over the vehicle's lifetime.

Build a security culture inside the organization:

Ensure that every enterprise in the automotive electronics supply chain is trained in functional safety and security best practices, and that a security culture takes shape within the organization.

"Only 2-3 operating systems will be left within 5 years"

For most people, the most tangible software in the car is the IVI (in-vehicle infotainment) system. In fact, when domestic OEMs and suppliers develop IVI systems today, most build on Android.

The advantages are obvious. First, development difficulty: China has plenty of engineers skilled in Android development, so teams are easier to build. Second, Android's third-party application ecosystem is mature; whether through APIs or SDKs, mobile internet services can quickly be ported into the car.

Besides Android there is the Linux camp, represented by Tesla. Both are essentially open-source systems, giving OEMs a bigger say in system-level customization and more personalized products.

Kaivan Karimi told me that although building on open source has some early advantages, it invites risk at the most critical level: security. Open source also means more potential avenues of intrusion. And while open-source software costs less than QNX in initial development, its ongoing maintenance costs run higher.

Also, as we noted before when discussing domestic brands building Android head units, this strategy is hostage to Google's development cadence. Head-unit hardware cannot iterate quickly, which raises the bar for sustained system maintenance.

Kaivan Karimi put it this way: "Linux will disappear from cars within 5 years; only 2-3 operating systems will remain." The claim may be absolute, but it shows that, from a security vendor's perspective, more operating systems means more risk.

Last year QNX released Hypervisor 2.0. Its virtualization technology can consolidate the digital instrument cluster, the IVI system, ADAS, and other operating systems onto a single chip, while keeping the modules partitioned, so that even if a risk surfaces in one part, the remaining functions are unaffected. For example, if the IVI system crashes, the vehicle's underlying driving-safety functions keep working. As autonomous driving begins to spread, this capability becomes all the more necessary.



Hypervisor 2.0 can host guest software such as Android. Kaivan Karimi told me that both Android and Baidu's products can run inside QNX's IVI system, which makes up for QNX's weaker third-party ecosystem. As far as GeekCar knows, one mainstream cockpit development approach in China today is "QNX cluster + QNX Hypervisor + Android."

When it comes to vehicle system security, no amount of attention is excessive; after all, nobody wants to sit in an uncontrollable car. Seen this way, security vendors like BlackBerry QNX may have little visibility to ordinary users, but the role they play is critical.

Original statement: this article is an original GeekCar work and reposting is welcome. When reposting, please credit the author and "Source: GeekCar" at the top of the article, attach a link to the original, and do not modify the content. Thank you for your cooperation!

Follow GeekCar on WeChat: GeekCar 极客汽车 (WeChat ID: GeekCar) & 极市 (WeChat ID: geeket).

Rob Allen: Migrating to password_verify


In a new post to his site, Rob Allen walks through the process of migrating an older site to use the password hashing functions in PHP instead of the previous custom implementation.

I’ve recently been updating a website that was written a long time ago that has not been touched in a meaningful way in many years. In addition to the actual work I was asked to do, I took the opportunity to update the password hashing routines.

This site is so old that the passwords are stored using MD5 hashes and that’s not really good enough today, so I included updating to bcrypt hashing with password_hash() and password_verify() in my statement of work.

I’ve done this process before, but don’t seem to have documented it, so I thought I’d write up the steps I took in case it helps anyone else.

He starts off by taking all of the current passwords (not plain text, already hashed) and migrating them all to their bcrypt-ed versions. He then updates the login functionality to select the account by email and check the record's password value with the password_verify function. Finally, he updates the system to rehash the plain-text password value (received from the user and verified) with bcrypt, save that back to the database, and use the new password hashing method at account creation.
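The same wrap-the-old-hash pattern translates to any stack. As a rough illustration (not Rob's PHP code), here is a Python sketch using the bcrypt package; the helper names are hypothetical, and the legacy MD5 digests are assumed to be stored as hex strings.

# Rough Python sketch of the same migration idea (hypothetical helpers).
import bcrypt
import hashlib

def migrate_stored_hash(md5_hex):
    # One-off migration: wrap each stored MD5 digest in bcrypt,
    # so no plain-text passwords are needed up front.
    return bcrypt.hashpw(md5_hex.encode(), bcrypt.gensalt())

def verify_and_upgrade(password, stored_bcrypt_of_md5):
    # At login, rebuild the legacy MD5 digest and check it against the
    # bcrypt-wrapped value; on success, re-hash the verified plain-text
    # password directly with bcrypt and save that back instead.
    md5_hex = hashlib.md5(password.encode()).hexdigest()
    if bcrypt.checkpw(md5_hex.encode(), stored_bcrypt_of_md5):
        return bcrypt.hashpw(password.encode(), bcrypt.gensalt())
    return None  # wrong password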

Veracode 2018 State of Software Security Report: Most Flaws Take Over a Month to Fix


Veracode recently released Volume 9 of its State of Software Security report. The main findings are as follows:

1. Overview

The report's data comes from real applications: 700,000 scans of more than 2 trillion lines of code, over one year (April 1, 2017 to March 31, 2018).

Following industry best practice

The OWASP Top 10 pass rate on first scan fell for the third straight year, to 22.5%.

More than 85% of applications contain at least one vulnerability; more than 13% contain at least one critical security flaw.

This year's fix rate rose by 12 percentage points, with customers fixing at least 70% of discovered flaws.

Software remains riddled with vulnerable components. About 87.5% of Java applications, 92% of C++ applications, and 85.7% of .NET applications contain at least one vulnerable component.

The most common flaws in applications are largely unchanged. SQL injection still appears in nearly a third of applications; cross-site scripting (XSS) still appears in nearly half.

Fix behavior

More than 70% of flaws remain unfixed one month after discovery, and nearly 55% remain unfixed after three months.

A quarter of high-severity and critical flaws remain unfixed 290 days after discovery.

Flaws persist 3.5 times longer in applications scanned only 1 to 3 times a year than in applications scanned 7 to 12 times a year.

DevSecOps unicorns lead by a wide margin in fix speed; the most active DevSecOps programs fix flaws 11.5 times faster than the average organization.

Infrastructure, manufacturing, and finance have the hardest time fully fixing discovered flaws.

2. The overall state of software security

This report may explain why many security professionals feel anxious when they think about application security (AppSec): the number of flaws is enormous, and the share of vulnerable apps is staggering.

By category, flaw prevalence is essentially unchanged: the top 10 most prevalent flaw types barely moved from last year. In other words, organizations have made no breakthrough in raising developer awareness of serious vulnerabilities such as cryptographic flaws, SQL injection, and cross-site scripting. That may be because organizations are still busy embedding security best practices into the secure development lifecycle (SDLC), wherever those standards come from.

On customer fix speed: in the first week after a flaw is found, organizations fix only about 15% of issues; within the first month, under 30%; by the third month, still under 50%, only slightly above 45%.

The average speed at which an organization fixes flaws reflects not only the performance of its AppSec program but also benchmarks its application risk. From a flaw-persistence perspective: 21 days after discovery, 75% of flaws are still open; after 121 days, 50% are still open; after 472 days, 25% are still open.


3. The state of remediation

What is certain is that most organizations have enough open flaws that they must weigh security, practicality, and speed against one another. There are simply too many flaws to fix right away, which demands intelligent prioritization so that the riskiest flaws are closed first.

The report introduces the concept of "flaw persistence intervals": how long it takes for 25%, 50%, and 75% of the flaws in a given application to be closed.

By severity, the more severe the flaw, the faster the fix: closing 25% of the most severe flaws takes 14 days, 50% takes 64 days, and 75% takes 206 days. Organizations fix critical and high-severity flaws 57% faster than other flaws.



By geography, Asia-Pacific closes newly discovered flaws fastest, needing only 8 days to fix 25% of flaws; the Americas (mostly the US) sit at the average (22 days); EMEA is slowest, at 28 days.



The report also breaks remediation down by country.



It likewise breaks the fix statistics down by industry.



Beyond fix speed, how developers fix flaws is also worth noting. Of the flaws discovered, 51.9% were fixed, 43.9% were left unfixed, and 4.3% were mitigated. Why developers choose mitigation rather than changing code can be glimpsed from the mitigations they apply: mitigation by design (2.8%), potential false positive (1.1%), no action after review (0.1%), mitigation by OS environment (0.1%), mitigation by network environment (<0.1%), and so on. Notably, potential false positives are not the leading reason developers choose mitigation.

How do organizations prioritize which flaws to fix? The flaw persistence intervals described in the report do not actually probe how policy affects fix timing. In general, each organization's policy produces different customer fix behavior. The analysis shows that many policies clearly weigh severity; some may weigh exploitability; others emphasize particular flaw categories; and still others decide how fixes ship based on how business-critical the application is. Developers may schedule fixes according to their organization's policy, and customers may decide based on the factors above or on factors unique to their organization or industry. The point worth pondering: organizations need to start thinking about which factors matter most to their remediation plans.

4. Common flaw types

The most common flaw types are essentially the same as last year. The top four appear in more than half of the applications tested; that is, most applications contain information leakage, cryptographic issues, poor code quality, and CRLF injection.

The top 20 flaw types are: information leakage, cryptographic issues, code quality, CRLF injection, XSS, directory traversal, insufficient input validation, credentials management, SQL injection, encapsulation issues, time-and-state issues, command or argument injection, API abuse, untrusted initialization, session fixation, potential backdoors, race conditions, code injection, error handling, and untrusted search paths.



In dynamic application security testing (DAST), however, the common flaw types look somewhat different.



The report also analyzes how quickly (how persistently) the common flaw types get fixed.


5. The DevSecOps effect

DevSecOps practices are sweeping the globe. More and more enterprises realize that the software delivery speed unlocked by DevOps practices often plays a decisive role in digital transformation and business competitiveness. A CA Technologies study found that organizations running DevOps and agile processes grow revenue and profit 60% more than their peers, and are 2.4 times more likely to be growing the business at over 20%.

The report's data shows that users adopting DevSecOps-style continuous software delivery fix flaws faster than the typical organization.

6. Application risk by industry

The most common flaw types vary by industry.



Undeniably, the largest number of applications tested came from the financial industry. Although financial organizations enjoy a reputation for the most mature cybersecurity practices, the data shows the industry is fighting, like every other, to keep its applications secure; by flaw persistence, its application flaws linger longer than other industries'. Government and education are getting faster at fixing flaws, and healthcare fixes application risk faster than other industries.

7. Conclusions

Lessons for security professionals, developers, and business leaders include:

Fix speed is critical. The speed at which an organization fixes code flaws directly reflects its application risk. The faster the fix, the lower the software risk.

Consider risk holistically. The sheer volume of unfixed flaws in enterprise applications cannot be resolved at once, so they must be prioritized.

DevSecOps works. The data shows that the more often organizations scan each year, the faster fixes ship. The high cadence of change DevSecOps brings makes its fix speed look like light speed next to traditional dev teams.

Enterprises are still dogged by open-source components in their software. They should consider not only vulnerabilities in libraries and frameworks but also how components are used. If changing how a component is used keeps a flaw from being exploited, that too is a way to mitigate the flaw.

This article was translated from Veracode by 360 Code Guard (360代码卫士).

Statement: this article comes from Code Guard (代码卫士); copyright belongs to the author. The content represents only the author's independent views, not the position of 安全内参; it is reposted to convey more information. To repost, please contact the original author for authorization.

Getting a Feel for JWT


I haven't blogged in a long time. My company recently asked me to learn Spring Cloud so we can migrate our old software to the new architecture, so I've been studying frantically, always in a rush, and failed to take notes on many key points; now I've forgotten a lot of crucial details. Shameful!

Thinking it over, steady steps are better. So today I'm surfacing to fill in what I learned and then forgot.

Want to unlock more new tricks? Visit my blog.

Common authentication mechanisms

Today, let's talk about JWT.

Many people have seen and used JWT already. It's an authentication specification based on the JSON data structure; simply put, it verifies whether a user is logged in. You may be thinking: isn't that what sessions are for? Distributed systems use Redis for distributed sessions, so what does JWT add?

Let me walk you slowly through the history!

The most primitive approach: HTTP Basic Auth

Don't let the long name of HTTP Basic Auth fool you into thinking it's sophisticated. The principle is simple: on every API request, the username and password are passed to the server along with the RESTful call. This implements a stateless idea: each HTTP request is unrelated to the ones before it; the client fetches the target URI, and once the content is delivered, the connection is dropped without a trace. But don't assume that's great just because statelessness is today's hot idea: the drawback is that sending the username and password with every HTTP request can easily expose them to third-party clients. The risk is so high that this method is rarely used in production.

Session and cookie

Sessions and cookies are old news. At the start, a session object is created on the server; it holds the key information, and a sessionId is sent to the client, where it's kept in the browser as a cookie.

At authentication time, the cookie data is sent to the server and matched against the session to verify it.



This implements a stateful idea: a service instance can back up part of its data at any time, and a new stateful service instance can be restored from that backup, achieving data persistence.

Drawbacks

This is just about the most common approach in today's software, but it has some drawbacks:

Security. Cookies aren't very secure; an attacker can steal local cookies for impersonation or use them for CSRF attacks.

Cross-domain issues. Across multiple domain names, cookies run into cross-origin problems.

Statefulness. Sessions must be kept on the server for some period of time, so a large user base significantly degrades server performance.

State sharing. With multiple machines, sharing sessions becomes a problem: if a user's first request lands on server A and the second is forwarded to server B, how does B learn the state?

Mobile. Today's smartphones, Android included, don't support cookies natively, and using them is a hassle.

Token authentication (using the JWT spec)

Even within computing, "token" has several definitions. Here, a token is a credential for accessing resources. With token-based authentication, the server doesn't need to store login records. The rough flow is:

1. The client requests login with a username and password.
2. The server receives the request and verifies the username and password.
3. On success, the server signs a token and sends it to the client.
4. The client stores the token, for example in a cookie.
5. The client sends the server-signed token with every request for resources.
6. The server verifies the token in the request and, if valid, returns the requested data.

The essence of the token mechanism, I think, is to boil the session information down and let the client carry it like a cookie: a client-side "session."

Benefits

So what does the token mechanism offer over the cookie mechanism?

Cross-domain access: cookies can't cross domains; tokens have no such problem, provided the user's authentication information travels in an HTTP header.

Stateless: the token mechanism is essentially verification; the session state comes entirely from the client. The server stores no session information, because the token itself carries all the logged-in user's information; only the client's cookie or local storage holds state.

CDN-friendly: all your assets (JavaScript, HTML, images, etc.) can be served from a CDN, while your server only provides the API.

Decoupled: you aren't bound to one authentication scheme. A token can be generated anywhere, as long as it can be validated when your API is called.

Mobile-ready: when the client is a native platform (iOS, Android, Windows 8, etc.), cookies are not supported (you'd need a cookie container); token authentication is much simpler.

CSRF: since you no longer rely on cookies, you don't need to worry about cross-site request forgery.

Performance: a network round trip (looking session data up in a database) takes far longer than an HMAC-SHA256 computation to verify and parse a token.

No special handling for the login page: if you run functional tests with Protractor, the login page no longer needs special treatment.

Standards-based: your API can adopt the standardized JSON Web Token (JWT). The standard is supported by multiple backend libraries (.NET, Ruby, Java, Python, PHP) and many companies (e.g. Firebase, Google, Microsoft).

So where are the flaws?

For all those benefits, token authentication isn't as magical as it seems; tokens have problems too.

Bandwidth

A JWT is normally larger than a session_id, consuming more traffic and bandwidth. With 100,000 page views a month, that means tens of extra megabytes. It sounds small, but it adds up, and in practice many people store even more information in a JWT.

You'll hit the database anyway

With JWTs on a website, almost every page a user loads still needs user information from a cache or database. For a high-traffic service, is that really appropriate? And if you cache with Redis, it's no more efficient than a session.

Tokens can't be revoked on the server, so hijacking is very hard to solve.

Performance

One of JWT's selling points is the cryptographic signature, which lets the receiver verify that a JWT is valid and trusted. But in most web authentication setups the JWT is stored in a cookie, which means two layers of signatures. It sounds impressive, but there's no advantage: you pay twice the CPU cost to verify signatures. For web applications with strict performance requirements this isn't ideal, especially in single-threaded environments.

JWT

Now for today's protagonist: JWT.

JSON Web Token (JWT) is a very lightweight specification that lets us pass safe, reliable information between a user and a server.


Structure

A JWT is really just a string made of three parts: the header, the payload, and the signature.

Header

The header describes the most basic metadata about the JWT, such as its type and the signing algorithm. It too can be expressed as a JSON object.

{ "typ":"JWT", "alg":"HS256" }

That's the header in plain text: the first field says this is a JWT; the second says the signature uses the HS256 algorithm.

The header is then Base64-encoded, producing:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9

Payload

The payload is where the meaningful claims live. They come in three kinds:

(1) Registered claims (recommended but not mandatory):

iss: the issuer of the JWT
sub: the subject, the user the JWT is about
aud: the audience receiving the JWT
exp: the expiration time, which must be later than the issued-at time
nbf: the time before which the JWT must not be accepted
iat: the time the JWT was issued
jti: the JWT's unique identifier, mainly used as a one-time token to prevent replay attacks

(2) Public claims: anything may be added here, usually user-related information or whatever the business needs, but not sensitive information, because this part can be decoded on the client.

(3) Private claims

Private claims are defined jointly by provider and consumer. Sensitive information is discouraged here too, because Base64 is a reversible encoding, which effectively makes this part plain text.

{ "sub":"1234567890", "name":"tengshe789", "admin": true }

That's a simple payload in plain text; next, Base64-encode it:

eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9

Signature

The third part of a JWT is the signature, built from three pieces:

the Base64-encoded header
the Base64-encoded payload
a secret

The Base64-encoded header and payload are joined with a ".", and the resulting string is signed with the algorithm declared in the header, salted with the secret. That forms the third part of the JWT.

TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ

Putting it all together:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ

Implementing a JWT
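Before reaching for a library, the three steps above can be reproduced by hand. Here is a minimal Python sketch, assuming HS256 and a shared secret of "tengshe789" as in the examples below; note that JWT uses padding-free base64url encoding rather than plain Base64.

# Minimal sketch: HS256 JWT by hand (secret "tengshe789" assumed).
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWT uses base64url without "=" padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload, secret):
    header = {"typ": "JWT", "alg": "HS256"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." +
                     b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token, secret):
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = make_jwt({"sub": "1234567890", "name": "tengshe789", "admin": True}, "tengshe789")
print(token)
print(verify_jwt(token, "tengshe789"))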

These days JWTs are generally implemented with JJWT, an Apache-licensed open-source project: a Java library that provides end-to-end JWT creation and verification.

Dependency:

<!-- https://mvnrepository.com/artifact/io.jsonwebtoken/jjwt -->
<dependency>
    <groupId>io.jsonwebtoken</groupId>
    <artifactId>jjwt</artifactId>
    <version>0.7.0</version>
</dependency>

A demo that creates a token:

public class CreateJWT {
    public static void main(String[] args) throws Exception {
        JwtBuilder builder = Jwts.builder().setId("123")
                .setSubject("jwt所面向的用户")  // the subject of the JWT
                .setIssuedAt(new Date())
                .signWith(SignatureAlgorithm.HS256, "tengshe789");
        String s = builder.compact();
        System.out.println(s);
        // eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxMjMiLCJzdWIiOiJqd3TmiYDpnaLlkJHnmoTnlKjmiLciLCJpYXQiOjE1NDM3NTk0MjJ9.1sIlEynqqZmA4PbKI6GgiP3ljk_aiypcsUxSN6-ATIA
    }
}

The result looks like this:

(Note: JJWT does not support JDK 11, and from 0.9.1 onward you must use the newer signWith() overloads.)

A demo that parses a token:

public class ParseJWT {
    public static void main(String[] args) {
        String token = "eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIxMjMiLCJzdWIiOiJqd3TmiYDpnaLlkJHnmoTnlKjmiLciLCJpYXQiOjE1NDM3NTk0MjJ9.1sIlEynqqZmA4PbKI6GgiP3ljk_aiypcsUxSN6-ATIA";
        Claims claims = Jwts.parser().setSigningKey("tengshe789").parseClaimsJws(token).getBody();
        System.out.println("id" + claims.getId());
        System.out.println("Subject" + claims.getSubject());
        System.out.println("IssuedAt" + claims.getIssuedAt());
    }
}

The result looks like this:


JWT in production

In enterprise systems there are usually many internal tool platforms: human resources, code management, log monitoring, budget requests, and so on. If every platform implemented its own user system, it would be a huge waste, so companies run a shared user system: log in once and you can access every system.

That is single sign-on (SSO: Single Sign-On).

SSO is an umbrella term for a class of solutions. For concrete implementation, there are generally two strategies to choose from:

SAML 2.0
OAuth 2.0

Before the main act, a few important concepts.

Authentication vs. Authorisation

Authentication: identity verification, hereafter simply "authentication."

Authentication establishes that you are allowed to access the system; it determines whether a visitor is a legitimate user. The service responsible for authentication is usually called the Authorization Server or Identity Provider, hereafter IdP.

Authorisation: authorization.

Authorization decides which resources you may access. Most people don't distinguish the two, because from the user's standpoint they blend together. For a system designer, though, they are separate responsibilities: we may need only authentication and no authorization, or even skip implementing authentication ourselves and lean on Google's, letting users log in with a Google account. The service that provides the resources (API calls) is called the Resource Server or Service Provider, hereafter SP.

SAML 2.0

OAuth (JWT)

OAuth (Open Authorization) is an open authorization standard that lets a user grant a third-party application access to private resources the user keeps on a web service (photos, videos, contact lists), without handing the username and password to the third party.

The flow can be pictured as follows:



Put simply, to use an application service you first ask it for a request token, then send that request token to the third-party authorization server, which gives you an access token; with the access token, you can use the application service.

Note step 4 in the diagram, where the request token is exchanged for an access token: many third-party systems, such as Google, return more than just the access token. The extra piece relevant to later renewal is the refresh token: once the access token expires, you can use the refresh token to request a new access token.



Of course, the exact flow depends on the request method and the resource type, and business flows vary widely; this is just a quick sketch.

This approach is very common now; "Log in with QQ" and the like basically work this way.

An open-source project

Let's take the popular open-source project Cloud-Admin as an example and analyze how it applies JWT.

Cloud-Admin is a microservice development platform based on Spring Cloud, with unified authorization and authentication in its admin backend. It includes modules for user management, resource permission management, and gateway API management, and supports parallel development of multiple business systems.

Directory structure

The auth-center functionality lives under ace-auth and ace-gate.

Model

Below is the architecture model provided by the project.



As you can see, the AuthServer sits at the center of the architecture: to reach any service, a request must pass the auth center's JWT check.

Reading the auth-center server code

Entity classes

Start with the entity classes. The auth center defines a set of client entities:

@Table(name = "auth_client")
@Getter
@Setter
public class Client {
    @Id
    private Integer id;
    private String code;
    private String secret;
    private String name;
    private String locked = "0";
    private String description;
    @Column(name = "crt_time")
    private Date crtTime;
    @Column(name = "crt_user")
    private String crtUser;
    @Column(name = "crt_name")
    private String crtName;
    @Column(name = "crt_host")
    private String crtHost;
    @Column(name = "upd_time")
    private Date updTime;
    @Column(name = "upd_user")
    private String updUser;
    @Column(name = "upd_name")
    private String updName;
    @Column(name = "upd_host")
    private String updHost;
    private String attr1;
    private String attr2;
    private String attr3;
    private String attr4;
    private String attr5;
    private String attr6;
    private String attr7;
    private String attr8;
}

It maps to this table:

CREATE TABLE `auth_client` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `code` varchar(255) DEFAULT NULL COMMENT '服务编码',
  `secret` varchar(255) DEFAULT NULL COMMENT '服务密钥',
  `name` varchar(255) DEFAULT NULL COMMENT '服务名',
  `locked` char(1) DEFAULT NULL COMMENT '是否锁定',
  `description` varchar(255) DEFAULT NULL COMMENT '描述',
  `crt_time` datetime DEFAULT NULL COMMENT '创建时间',
  `crt_user` varchar(255) DEFAULT NULL COMMENT '创建人',
  `crt_name` varchar(255) DEFAULT NULL COMMENT '创建人姓名',
  `crt_host` varchar(255) DEFAULT NULL COMMENT '创建主机',
  `upd_time` datetime DEFAULT NULL COMMENT '更新时间',
  `upd_user` varchar(255) DEFAULT NULL COMMENT '更新人',
  `upd_name` varchar(255) DEFAULT NULL COMMENT '更新姓名',
  `upd_host` varchar(255) DEFAULT NULL COMMENT '更新主机',
  `attr1` varchar(255) DEFAULT NULL,
  `attr2` varchar(255) DEFAULT NULL,
  `attr3` varchar(255) DEFAULT NULL,
  `attr4` varchar(255) DEFAULT NULL,
  `attr5` varchar(255) DEFAULT NULL,
  `attr6` varchar(255) DEFAULT NULL,
  `attr7` varchar(255) DEFAULT NULL,
  `attr8` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=14 DEFAULT CHARSET=utf8mb4;

These rows hold the information for each microservice client.

The second entity class is the client-service entity: it records which microservice clients may call which other microservice clients.

Roughly speaking, it captures the call-permission relationships between microservices.

@Table(name = "auth_client_service")
public class ClientService {
    @Id
    private Integer id;
    @Column(name = "service_id")
    private String serviceId;
    @Column(name = "client_id")
    private String clientId;
    private String description;
    @Column(name = "crt_time")
    private Date crtTime;
    @Column(name = "crt_user")
    private String crtUser;
    @Column(name = "crt_name")
    private String crtName;
    @Column(name = "crt_host")
    private String crtHost;
}

The controller layer

Let's jump ahead and look at the controller layer first:

@RestController
@RequestMapping("jwt")
@Slf4j
public class AuthController {
    @Value("${jwt.token-header}")
    private String tokenHeader;

    @Autowired
    private AuthService authService;

    @RequestMapping(value = "token", method = RequestMethod.POST)
    public ObjectRestResponse<String> createAuthenticationToken(
            @RequestBody JwtAuthenticationRequest authenticationRequest) throws Exception {
        log.info(authenticationRequest.getUsername() + " require logging...");
        final String token = authService.login(authenticationRequest);
        return new ObjectRestResponse<>().data(token);
    }

    @RequestMapping(value = "refresh", method = RequestMethod.GET)
    public ObjectRestResponse<String> refreshAndGetAuthenticationToken(
            HttpServletRequest request) throws Exception {
        String token = request.getHeader(tokenHeader);
        String refreshedToken = authService.refresh(token);
        return new ObjectRestResponse<>().data(refreshedToken);
    }

    @RequestMapping(value = "verify", method = RequestMethod.GET)
    public ObjectRestResponse<?> verify(String token) throws Exception {
        authService.validate(token);
        return new ObjectRestResponse<>();
    }
}

Three endpoints are exposed here.

First, creating a token.

The logic: every user passes through here at login. Using the username and password in the request, a feign client interceptor intercepts the request; then the author's JwtTokenUtil methods extract the key and secret from the token and verify it. If it checks out, authService.login(authenticationRequest) returns a new token.

public String login(JwtAuthenticationRequest authenticationRequest) throws Exception {
    UserInfo info = userService.validate(authenticationRequest);
    if (!StringUtils.isEmpty(info.getId())) {
        return jwtTokenUtil.generateToken(new JWTInfo(info.getUsername(), info.getId() + "", info.getName()));
    }
    // "User does not exist, or the account/password is wrong!"
    throw new UserInvalidException("用户不存在或账户密码错误!");
}

The detailed flow is illustrated in the original post's diagram.


The auth-center client code

Entry point

The author provides an annotation as the entry point: adding @EnableAceAuthClient automatically enables auth management for a microservice (client).

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Import(AutoConfiguration.class)
@Documented
@Inherited
public @interface EnableAceAuthClient {
}

Configuration

Following the annotation's entry point:

@Configuration
@ComponentScan({"com.github.wxiaoqi.security.auth.client", "com.github.wxiaoqi.security.auth.common.event"})
public class AutoConfiguration {
    @Bean
    ServiceAuthConfig getServiceAuthConfig() {
        return new ServiceAuthConfig();
    }

    @Bean
    UserAuthConfig getUserAuthConfig() {
        return new UserAuthConfig();
    }
}

The annotation automatically loads the client's key user-token and service-token settings into beans.

The feign interceptor

The author overrides okhttp3's interceptor method: every token on a microservice client request is intercepted and checked, verifying whether the service-to-service token and the user token have expired; if one has expired, a fresh token is fetched.

@Override
public Response intercept(Chain chain) throws IOException {
    Request newRequest = null;
    if (chain.request().url().toString().contains("client/token")) {
        newRequest = chain.request()
                .newBuilder()
                .header(userAuthConfig.getTokenHeader(), BaseContextHandler.getToken())
                .build();
    } else {
        newRequest = chain.request()
                .newBuilder()
                .header(userAuthConfig.getTokenHeader(), BaseContextHandler.getToken())
                .header(serviceAuthConfig.getTokenHeader(), serviceAuthUtil.getClientToken())
                .build();
    }
    Response response = chain.proceed(newRequest);
    if (HttpStatus.FORBIDDEN.value() == response.code()) {
        if (response.body().string().contains(String.valueOf(CommonConstants.EX_CLIENT_INVALID_CODE))) {
            log.info("Client Token Expire,Retry to request...");
            serviceAuthUtil.refreshClientToken();
            newRequest = chain.request()
                    .newBuilder()
                    .header(userAuthConfig.getTokenHeader(), BaseContextHandler.getToken())
                    .header(serviceAuthConfig.getTokenHeader(), serviceAuthUtil.getClientToken())
                    .build();
            response = chain.proceed(newRequest);
        }
    }
    return response;
}

The Spring container interceptors

The second line of interceptors comes from the Spring container. The feign interceptor only checks whether the two tokens have expired; it does not verify what the tokens actually authorize. Next, the permissions behind both tokens are checked.

The service-call permission check:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    HandlerMethod handlerMethod = (HandlerMethod) handler;
    // If this annotation is present, skip service-token interception
    IgnoreClientToken annotation = handlerMethod.getBeanType().getAnnotation(IgnoreClientToken.class);
    if (annotation == null) {
        annotation = handlerMethod.getMethodAnnotation(IgnoreClientToken.class);
    }
    if (annotation != null) {
        return super.preHandle(request, response, handler);
    }
    String token = request.getHeader(serviceAuthConfig.getTokenHeader());
    IJWTInfo infoFromToken = serviceAuthUtil.getInfoFromToken(token);
    String uniqueName = infoFromToken.getUniqueName();
    for (String client : serviceAuthUtil.getAllowedClient()) {
        if (client.equals(uniqueName)) {
            return super.preHandle(request, response, handler);
        }
    }
    throw new ClientForbiddenException("Client is Forbidden!");
}

The user permission check:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    HandlerMethod handlerMethod = (HandlerMethod) handler;
    // If this annotation is present, skip user-token interception
    IgnoreUserToken annotation = handlerMethod.getBeanType().getAnnotation(IgnoreUserToken.class);
    if (annotation == null) {
        annotation = handlerMethod.getMethodAnnotation(IgnoreUserToken.class);
    }
    if (annotation != null) {
        return super.preHandle(request, response, handler);
    }
    String token = request.getHeader(userAuthConfig.getTokenHeader());
    if (StringUtils.isEmpty(token)) {
        if (request.getCookies() != null) {
            for (Cookie cookie : request.getCookies()) {
                if (cookie.getName().equals(userAuthConfig.getTokenHeader())) {
                    token = cookie.getValue();
                }
            }
        }
    }
    IJWTInfo infoFromToken = userAuthUtil.getInfoFromToken(token);
    BaseContextHandler.setUsername(infoFromToken.getUniqueName());
    BaseContextHandler.setName(infoFromToken.getName());
    BaseContextHandler.setUserID(infoFromToken.getId());
    return super.preHandle(request, response, handler);
}

@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
    BaseContextHandler.remove();
    super.afterCompletion(request, response, handler, ex);
}

The Spring Cloud Gateway code

Every request in this framework passes through the gateway service (ace-gatev2). The gateway verifies that the token is not expired or abnormal, that the token exists, and that the token is authorized for the requested service.

The core code:

@Override
public Mono<Void> filter(ServerWebExchange serverWebExchange, GatewayFilterChain gatewayFilterChain) {
    log.info("check token and user permission....");
    LinkedHashSet requiredAttribute = serverWebExchange.getRequiredAttribute(ServerWebExchangeUtils.GATEWAY_ORIGINAL_REQUEST_URL_ATTR);
    ServerHttpRequest request = serverWebExchange.getRequest();
    String requestUri = request.getPath().pathWithinApplication().value();
    if (requiredAttribute != null) {
        Iterator<URI> iterator = requiredAttribute.iterator();
        while (iterator.hasNext()) {
            URI next = iterator.next();
            if (next.getPath().startsWith(GATE_WAY_PREFIX)) {
                requestUri = next.getPath().substring(GATE_WAY_PREFIX.length());
            }
        }
    }
    final String method = request.getMethod().toString();
    BaseContextHandler.setToken(null);
    ServerHttpRequest.Builder mutate = request.mutate();
    // URIs that are not intercepted
    if (isStartWith(requestUri)) {
        ServerHttpRequest build = mutate.build();
        return gatewayFilterChain.filter(serverWebExchange.mutate().request(build).build());
    }
    IJWTInfo user = null;
    try {
        user = getJWTUser(request, mutate);
    } catch (Exception e) {
        log.error("User token expired", e);
        return getVoidMono(serverWebExchange, new TokenForbiddenResponse("User Token Forbidden or Expired!"));
    }
    List<PermissionInfo> permissionIfs = userService.getAllPermissionInfo();
    // Check whether the resource has permission constraints enabled
    Stream<PermissionInfo> stream = getPermissionIfs(requestUri, method, permissionIfs);
    List<PermissionInfo> result = stream.collect(Collectors.toList());
    PermissionInfo[] permissions = result.toArray(new PermissionInfo[]{});
    if (permissions.length > 0) {
        if (checkUserPermission(permissions, serverWebExchange, user)) {
            return getVoidMono(serverWebExchange, new TokenForbiddenResponse("User Forbidden!Does not has Permission!"));
        }
    }
    // Attach the client secret header
    mutate.header(serviceAuthConfig.getTokenHeader(), serviceAuthUtil.getClientToken());
    ServerHttpRequest build = mutate.build();
    return gatewayFilterChain.filter(serverWebExchange.mutate().request(build).build());
}
Cloud-Admin summary

All in all, that covers the auth and gateway modules. The author's code is ingeniously conceived: used in a large permission system, it reduces coupling, refines the granularity of service authentication, and eases management.

Registration Opens for the College Student Cybersecurity Capability Competition!


About the event

The College Student Cybersecurity Capability Competition is a scholarship contest for students at well-known domestic cybersecurity schools and for young people who love internet security. It aims to raise the attention university IT talent pays to network security, attract more outstanding people into this important field, and safeguard the interests of the nation and its people.

Through a professional judging panel, the competition will select outstanding entries and talent that meet its standards, channel high-quality new security talent into the internet security industry, and help students realize their security dreams.

Schedule

Competition: December 3, 2018 - February 22, 2019

Registration: December 3, 2018 - January 14, 2019

Scoring and ranking: February 22-28, 2019

Award ceremony: March 2019, Hangzhou

Organizers

Alibaba Security, the Alibaba Data Security Institute, and the Alibaba Security Response Center (ASRC)

How to register

Go to the ASRC site (https://security.alibaba.com/), log in, and complete all personal information on the profile page (https://security.alibaba.com/leak/profile.htm).

Set the personal team field to your team name and school, e.g. "XX University - XXX Team"; in the detailed shipping address, add your major plus the school address (for mailing prizes); then on the event page (https://security.alibaba.com/online/detail?id=33) click "Register" and wait for approval.

Note: registration only goes through once all information is filled in as required.

Event details

Eligibility: students nationwide at every stage, focused on data security and vulnerability discovery, covering junior-college, undergraduate, master's, and doctoral students.

Format: team competition, 2-6 members per team; teams may form within one school or across schools (free nationwide cross-region team-ups are allowed).

Competition format

The competition covers two tracks: data security and vulnerability discovery. Every member must take part in both; the final ranking depends on the highest individual score, the team total, and the team average.

1. Big data: data security

Schedule:

1) Exam date: January 22, 2019, 9:30-11:30

2) Duration: 2 hours

3) Results announced: February 28, 2019

Rules: study materials will be distributed on January 2. The data security track is an online open-book exam; students click the data-security link on the registration page and log into the Alibaba exam system with their own Taobao ID. The exam is scored out of 100; 80 points or above earns a certificate; those who fall short get one retake.

Question types: three kinds of questions: single-choice, multiple-choice, and short answer.

2. New threats: ASRC vulnerability discovery

Schedule: December 3, 2018 - February 22, 2019

Process:

1) Log in to the new ASRC site (https://asrc.alibaba.com).

2) Follow the instructions on the home page to submit vulnerabilities/intelligence. Submissions must use the title format 【参赛+漏洞标题xx】 ("Entry + vulnerability title"); titles not in this format will not be scored.

3) Review is completed within 7 days, and vulnerability bounties are paid (from 10 yuan to 100,000+ yuan per vulnerability).

Scoring design: low-severity vulnerabilities/intelligence earn 1-20 points, capped at 30; medium, 20-50 points, capped at 60; high, 50-80 points, capped at 90; critical, 90-100 points, capped at 100.

Vulnerability scoring standard: a vulnerability's contribution value is determined by the harm it does to the application and by the application's importance. ASRC will weigh the severity and exploit difficulty in the given scenario to assign contribution points and a severity rating. See the official announcement (https://asrc.alibaba.com/#/announcement/45).

Intelligence scoring standard: an intelligence item's contribution value is determined by the harm it corresponds to and its completeness. ASRC will weigh the harm and completeness in the given scenario to assign an intelligence rating and contribution points. See the official announcement (https://asrc.alibaba.com/#/announcement/80).

Entry-level intelligence tips for the competition:

1) Low-severity intelligence: reports of Taobao/Alipay refund-phishing; anything reachable will do, an IP or a domain.

2) Data security intelligence: reports of merchants uploading order data to file-sharing sites; find cases where a merchant's upload leaked orders, send us the link, and we will rate it by the impact of the data.

Prizes

The competition offers 13 team awards plus an advising-teacher award.

Generous team cash prizes of up to 80,000 yuan



1. Strongest Team Award: members' total scores combined, top three nationwide.

2. Best Partner School Award: the schools fielding the most teams (with qualifying scores), top three nationwide.

3. Best Vulnerability Intelligence Award: members' combined ASRC vulnerability-discovery scores, top three nationwide.

4. Best Data Security Award: the highest individual data-security exam score on the team, top three nationwide.

5. National Participation Award: combined team scores, top 20 nationwide. Every member's score must qualify; Strongest Team winners are excluded from this award.

Honorary certificates for the advising teachers of winning teams

Crystal trophies and honorary certificates from Alibaba

All-expenses-paid invitations to the award ceremony and face-to-face exchanges with Alibaba partners

A fast track to courses taught by Alibaba security experts in 2019

ASRC vulnerability-discovery cash rewards of up to 100,000+ yuan

Premium membership benefits in the ASRC Sun Alliance

Special notes

1. The organizing committee reserves the right of final interpretation of the rules;

2. Cheating, or one person joining multiple teams, means disqualification;

Note: if you have any questions, contact 博雷 (18668197673) or 花姝 (15201092352), and feel free to scan the QR code to join the DingTalk group and chat!



Delphi Indy “SSL routines:SSL23_GET_CLIENT_HELLO:http request” means you get an http request, but expecting an https request



Posted by jpluimers on 2018/12/05

A client got this with Delphi Indy “SSL routines:SSL23_GET_CLIENT_HELLO:http request” and was confused.

The message means you get an http request, but are expecting an https request.

If you really want to, you can have one component service both http and https requests, though most of the time you really do not want to: you want to phase out http whenever possible.

Related: [WayBack] delphi - Can a single TIdHTTPServer component handle http and https request in the same time? - Stack Overflow

jeroen


Off-The-Shelf Hacker: Adding MQTT and Cron to the Lawn Sprinkler Project

$
0
0

This week we’ll continue our journey on building an automated sprinkler system . The project highlights key design and implementation concepts that off-the-shelf hackers will face in the systems they build.

While we could just program on/off times for each individual sprinkler head, directly on a standalone Arduino, a networked approach presents several benefits. Networking permits interaction between devices so they can accomplish things together, perhaps across vast distances. Additionally, network-enabled Arduino clones, such as the NodeMCU boards are nearly as cheap as a plain old non-networked Arduino. Might as well use them. There are also tons of libraries and sample programs you can leverage for your initial bare-bones proof-of-concept sprint. No need to create all the code from scratch.

In our case, we’re using the NodeMCU/relay board as an edge device. It has a limited program and physical space. It is also inexpensive and there are only a few parts, making it pretty rugged for being out in the garage. My rig uses the NodeMCU as a relay controller and receives commands from a “smarter” Raspberry Pi, that will manage the sprinkler scheduling, analysis and interaction with other systems further up the line.

How do you make gadgets talk to each other? I’m using the MQTT messaging protocol. Let’s get a general overview of why and how to use MQTT for sprinkler control. The code for the NodeMCU/relay board will appear later.

Why Use MQTT?

MQTT is a good communication model for networked physical computing and Internet of Things (IoT) projects because it is simple, reliable and lightweight. It is also mainstream and was designed for an industrial environment. I wrote about using MQTT on wearables back in January 2018. General installation instructions are in that article. Instead of the conference badge, I installed MQTT on Hedley the Skull for this project. Nowadays, the MQTT broker and client applications are a standard component on all my Linux devices.

The automated sprinkler system consists of two basic and several optional components. The MQTT broker resides on Hedley, this time as a matter of convenience. At some point, the broker will be moved to a dedicated Raspberry Pi server or up into the cloud. MQTT starts automatically whenever Hedley boots up. The MQTT client resides on the NodeMCU/relay board device and is embedded as part of the Arduino code.

We can also have an optional client on another Linux machine, such as my ASUS notebook. I installed the entire MQTT package, which includes both the client and broker parts. I used the notebook for testing.

The control idea for the sprinkler system is to send data, using MQTT, to the NodeMCU and turn a relay on or off. Once the broker is running on Hedley, we can simply send data to a topic, from the notebook, with the following command line.

drtorq-laptop% mosquitto_pub -h 192.168.1.107 -t inTopic -m 1

This particular combination turns on relay #1. The 192.168.1.107 is the local network address of the MQTT broker (on Hedley).

Once functionality is confirmed with manually sending data through the broker to the NodeMCU/relay we can use cron to automate the process.
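If you would rather script the same test than shell out to mosquitto_pub, the Eclipse Paho Python client offers a one-line equivalent. The broker address below matches this article's setup and is otherwise an assumption about your network.

# pip install paho-mqtt
import paho.mqtt.publish as publish

# Equivalent of: mosquitto_pub -h 192.168.1.107 -t inTopic -m 1
publish.single("inTopic", payload="1", hostname="192.168.1.107")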

Trigger the Relays with cron

Cron is a native Linux program for scheduling automated program execution. Traditionally, Unix/Linux system management duties, like running nightly data backups, are managed with cron. Set up a simple text file with your desired program execution time, and when that time comes up, the program will run. You can automate just about any task using cron.

For testing, I entered the start and stop times for a couple of sprinkler heads and used the mosquitto_pub command, indexing the corresponding NodeMCU/relay with the -m (message) option. Sending a “1” turns on the #1 relay. The “8” key turns on the #8 relay and so on. Sending a “0” turns off ALL the relays. While we could turn on more than one relay at a time, the NodeMCU’s little voltage regulator will only be able to supply enough juice for a couple of relays before overheating. Being aware of power requirements and how they affect the hardware components is something to keep in mind when designing your gadgets.

For the proof-of-concept, I used cron on my ASUS Linux notebook. You could just as easily set up the cron table on Hedley the Skull, the steampunk conference badge or any other Linux machine on your network. Remember we are running the MQTT broker on Hedley. You can even use the MQTT client on the same machine as the broker if desired. Just open a terminal and type in a mosquitto_pub/mosquitto_sub command. As long as the devices are powered up and connected to the network, everything will work fine.

Save yourself some heartache and use the traditional “crontab -e” command to edit your cron table.

doc-laptop% crontab -e

Make edits to the table, then use “Ctrl-o” and “Ctrl-x” to exit. The cron daemon will automatically restart after you save the file and exit crontab. Your file(s) will never run without restarting the daemon. Sad to say, it took me a long time to figure that last bit out, back in my Unix system admin days. At the time I didn’t have a little crontab -e script.

Here’s my cron table on the Linux notebook.

SHELL=/bin/bash
MAILTO=doc@drtorq.com
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
#
# Each line contains the date/time followed by the command and any options
# note: the user-name defaults to your login user.
#
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
#
# run the mosquitto_pub commands
#
30 07 * * * mosquitto_pub -h 192.168.1.107 -t inTopic -m 8
30 09 * * * mosquitto_pub -h 192.168.1.107 -t inTopic -m 0
45 09 * * * mosquitto_pub -h 192.168.1.107 -t inTopic -m 2
30 10 * * * mosquitto_pub -h 192.168.1.107 -t inTopic -m 0

The NodeMCU Code

The code for our NodeMCU device is pretty straightforward. I uploaded it to the NodeMCU board, from my Linux notebook using version 1.8.7 of the Arduino IDE.

/*
Basic ESP8266 MQTT example
*/
#include <ESP8266WiFi.h>
#include <PubSubClient.h>
// Update these with values suitable for your network.
const char* ssid = "my-local-ap";
const char* password = "falala-lala";
const char* mqtt_server = "192.168.1.107";
WiFiClient espClient;
PubSubClient client(espClient);
long lastMsg = 0;
char msg[50];
int value = 0;

void setup_wifi() {
delay(10);
// Connect to a WiFi network
Serial.println();
Serial.print("Connecting to ");
Serial.println(ssid);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(500);
Serial.print(".");
}
randomSeed(micros());
Serial.println("");
Serial.println("WiFi connected");
Serial.println("IP address: ");
Serial.println(WiFi.localIP());
}

void callback(char* topic, byte* payload, unsigned int length) {
Serial.print("Message arrived [");
Serial.print(topic);
Serial.print("] ");
for (int i = 0; i < length; i++) {
Serial.print((char)payload[i]);
}
Serial.println();
// Switch on the LED if an 1 was received as first character
if ((char)payload[0] == '1') {
digitalWrite(16, LOW);
}
else if((char)payload[0] == '2') {
digitalWrite(5, LOW);
}
else if((char)payload[0] == '3') {
digitalWrite(4, LOW);
}
else if((char)payload[0] == '4') {
digitalWrite(0, LOW);
}
else if((char)payload[0] == '5') {
digitalWrite(2, LOW);
}
else if((char)payload[0] == '6') {
digitalWrite(14, LOW);
}
else if((char)payload[0] == '7') {
digitalWrite(12, LOW);
}
else if((char)payload[0] == '8') {
digitalWrite(13, LOW);
}
else {
digitalWrite(16, HIGH);
digitalWrite(5, HIGH);
digitalWrite(4, HIGH);
digitalWrite(0, HIGH);
digitalWrite(2, HIGH);
digitalWrite(14, HIGH);
digitalWrite(12, HIGH);
digitalWrite(13, HIGH);
}
}

void reconnect() {
// Loop until we're reconnected
while (!client.connected()) {
Serial.print("Attempting MQTT connection...");
// Create a random client ID
String clientId = "ESP8266Client-";
clientId += String(random(0xffff), HEX);
// Attempt to connect
if (client.connect(clientId.c_str())) {
Serial.println("connected");
// Once connected, publish an announcement...
client.publish("outTopic", "hello world");
// ... and resubscribe
client.subscribe("inTopic");
} else {
Serial.print("failed, rc=");
Serial.print(client.state());
Serial.println(" try again in 5 seconds");
// Wait 5 seconds before retrying
delay(5000);
}
}
}

void setup() {
pinMode(16, OUTPUT); // Initialize the BUILTIN_LED pin as an output
pinMode(5, OUTPUT);
pinMode(4, OUTPUT);
pinMode(0, OUTPUT);
pinMode(2, OUTPUT);
pinMode(14, OUTPUT);
pinMode(12, OUTPUT);
pinMode(13, OUTPUT);
digitalWrite(16, HIGH);
digitalWrite(5, HIGH);
digitalWrite(4, HIGH);
digitalWrite(0, HIGH);
digitalWrite(2, HIGH);
digitalWrite(14, HIGH);
digitalWrite(12, HIGH);
digitalWrite(13, HIGH);
Serial.begin(115200);
setup_wifi();
client.setServer(mqtt_server, 1883);
client.setCallback(callback);
}
void loop() {
if (!client.connected()) {
reconnect();
}
client.loop();
long now = millis();
if (now - lastMsg > 2000) {
lastMsg = now;
++value;
snprintf (msg, 50, "hello world #%ld", value);
Serial.print("Publish message: ");
Serial.println(msg);
client.publish("outTopic", msg);
}
}

We go through the usual initialization, including the networking and MQTT libraries.

The callback section is where all the code/physical action takes place, grabbing messages from the MQTT broker and switching the relays on/off as needed. Notice that I turn all the relays OFF whenever we get a “0.” This helps restrict having only one relay on at a time.

We also turn all the relays OFF during the setup phase. I noticed that whenever the NodeMCU powered up it would turn all the relays ON by default. Just add a little code and the problem was eliminated.

Some serial print statements are visible throughout the code. These could be removed once everything is stable and happy.

What’s Next

In keeping with industry practice, security is an afterthought for this proof-of-concept. Obviously, if this were going to be a commercial system, that topic would need considerable attention. For now, the setup will reside behind my network firewall. If I come home one day and find all the sprinklers running at once, I may have a bigger problem.

It might make sense to add in some default behaviors, like turning off ALL the relays after some set period of time, regardless of the data coming in from the MQTT broker. This would be similar to a “watchdog timer” that resets a processor if there is a major error in a running program. While delivery of MQTT messages is very reliable, we still have to account for missing a message. We certainly don’t want a sprinkler (or all of them) running continuously for 72 hours if the NodeMCU doesn’t get the message or there is an error. Cutting power to the board, relays and solenoids might be a way to go. That’s known as a “dead man’s switch.”

Another idea is to gather some data from the yard, like reading a rain gauge and send it up our physical computing stack chain for analysis. That functionality is easily added to the NodeMCU/relay Arduino code, along with a few sensors and analysis programs up on the Raspberry Pi.

Feature image via Pixabay.

The silent CVE in the heart of Kubernetes apiserver

Dec 5, 2018 by Abraham Ingersoll

What’s the big fuss over the latest Kubernetes apiserver vulnerability?

Early on Monday December 3rd, a boulder splashed into the placidly silent Kubernetes security channels. A potentially high-severity authentication bypass was disclosed with scant explanation the same day that K8s version 1.13 went golden master. For Kubernetes administrators with PTSD from 2014’s Heartbleed, the CVE blast and its 37-line fix triggered palpitations in anticipation of sleepless patchfests to come.

In this post, we’ll explain the “verify backend upgrade connection” commit and the bug’s actual impact. We have also whipped up a proof of concept of the vulnerability, which we could not find elsewhere, in case you want to see if your clusters are affected.


Explanation of CVE-2018-1002105 root cause

Kubernetes apiserver has the ability to proxy http requests to other kubernetes services, allowing for the K8s API itself to become extensible through its Aggregation Layer . This is the same facility that allows RBAC or namespace-constrained users to magically kubectl exec , kubectl attach , and kubectl port-forward directly from their laptops to pods running within live clusters.

The Kubernetes cluster-local authentication model is largely based on full mutual TLS authentication (mTLS), and the various “microservice” components that make up a live K8s cluster use signed certificates to trust each other. When a “master” apiserver process establishes a connection to an “aggregate” layer, the master uses its certificate to authenticate the connection with this peer. The lower-level peer, aka the “aggregate layer”, verifies the certificate and trusts that the apiserver has validated the required credentials on the other side of the proxied connection.

Where the palpitations start is that the kubernetes API isn’t just basic HTTPS. To support remote administrative tasks, K8s also allows upgrading apiserver connections to full, live, end-to-end HTTP/2 websockets.

The CVE-2018-1002105 vulnerability comes from the way this websocket upgrade was handled: if the request contained the Connection: Upgrade http header, the master apiserver would forward the request and bridge the live socket to the aggregate. The problem occurs in the event that the websocket connection fails to complete. Prior to the fix, the apiserver could be tricked into assuming the pass-thru connection successfully landed even when it had triggered an error code. From that “half-open” and authenticated websocket state, the connected client could send follow-up HTTP requests to the aggregated endpoint, essentially masquerading itself as the master apiserver.

Because the apiserver has bridged the connections between client and server, this allowed the client to continue to use the connection to make requests, bypassing all security and audit controls on the master. Finding the exploit in logs would be extremely difficult.

How was CVE-2018-1002105 fixed?

The patch causes Kubernetes apiserver to check the result of the proxied connection attempt. If the “aggregated” server responds to the “upgrade” request by successfully switching protocols to the websocket connection, apiserver sends the response and bridges the connection. If there is any other result, the apiserver sends the response from the aggregated server without bridging the connection, preventing the authentication bypass.
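In pseudo-Python, the post-fix decision boils down to something like this; the names are illustrative, as the real fix lives in the apiserver’s Go proxy code:

from dataclasses import dataclass

@dataclass
class Response:
    status: int            # HTTP status from the aggregated server

def handle_upgrade(backend_response: Response) -> str:
    # Bridge the raw sockets only on a successful protocol switch.
    if backend_response.status == 101:   # "101 Switching Protocols"
        return "bridge"                  # hand the live socket to the client
    return "relay"                       # pass the error back; never bridge

assert handle_upgrade(Response(101)) == "bridge"
assert handle_upgrade(Response(404)) == "relay"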

What is the potential impact?

There’s a good reason this only misses a perfect ten on the Common Vulnerability Scoring System (CVSS) by two tenths of a point: the impact is quite severe given not-uncommon runtime conditions and deployment choices on many production clusters.

Anonymous access to the apiserver is enabled by default upstream so that basic access to the cluster, such as a load balancer checking health or discovering API endpoints, can be completed without requiring authentication. And while the maintainer of your certified Kubernetes service or distribution may have disabled anonymous access or limited it using RBAC, out-of-the-box upstream components that add K8s API functionality via “aggregates” (e.g., the scenario of anonymous HTTP requests forwarded amongst distinct components) are extremely widely used.

In short, while certain hardened deployments require luck to fully exploit, CVE-2018-1002105 could only be worse if it gave root to all of your machines.

I’ve disabled anonymous Kubernetes API access, am I still affected?

Disabling anonymous access only restricts the bug to exploitation by authenticated users. If any K8s API user (e.g., any kubectl user) is allowed to exec into a pod (even one restricted to pods within their own “namespace”), they could exploit CVE-2018-1002105 to exec into any pod handled by the same kubelet (server).

Once they’re within a pod that they don’t normally have access to, they can potentially pivot, using that pod’s access to do other things within the cluster, such as pivot into your cluster control plane and then use those controllers’ credentials to deploy additional privileged pods, change configuration, etc.

In simplest terms, CVE-2018-1002105 allows some level of privilege escalation within the cluster, which depending on somewhat random runtime factors could lead to total control of the cluster.

How do you tell if you are affected?

If you have live kubeconfig (kubectl) credentials, you can simply download and run a CVE-2018-1002105 vulnerability checker created by one of our Kubernetes engineers.

All current mainline Kubernetes releases (latest v1.10.x, 1.11.x, 1.12.x & 1.13.0) now include the fix, and most vendors with long-term support (LTS) policies have cherry-picked the fix into older K8s branches no longer supported by the community. For Gravity users, we published updates to our downloads page the day the exploit was released. You can download the latest version at https://gravitational.com/gravity/download/ .

See our Kubernetes Release Cycles post if you’re curious why a K8s release from less than a year ago is unsupported by the primary Kubernetes maintainers.

What can be done if you’re vulnerable and can’t upgrade your cluster?

Since the initial drop, @liggitt and other leaders in the Kubernetes community have added lots of nitty-gritty detail to the issue at https://github.com/kubernetes/kubernetes/issues/71411 . The potential mitigations revolve around disabling anonymous API access, then either removing features or downgrading access permissions for authenticated users. None of the work-arounds looks particularly appetizing if you have a diverse set of end-users and workloads: patching and a rolling restart of your apiservers is pretty much required.

Conclusion

Kubernetes is generally a tribute to well-designed, community-driven open-source software, built with current security best practices and modern security choices. Kubernetes is also a large project, which leaves plenty of surface area where exploits can linger. For a project that has grown and expanded so rapidly over the past four years, we find it impressive that more severe vulnerabilities haven’t been found.

Reflections on being an indie hacker

Introduction

My name is Tigran and, by definition, I’m probably half an indie hacker. Why half, you may ask? Because I’m a full-time software engineer at Buffer, but at the same time I’ve built a profitable online side-business called Cronhub. Going by how one of my favorite internet sites, Indie Hackers, defines the term, I think I fit the definition, but not quite.


How IndieHackers.com defines indie hacker

I generate money independently through the product I’ve created, but I also have a primary source of income: my employer. I’m a solo founder and have been bootstrapping Cronhub for the past 8 months or so. So I may have the right to call myself an indie hacker, right? If your answer is yes, then read my story further. I’ve also written another article if you’d like to read more about how I work remotely.

I wanted to be an indie hacker for multiple reasons, but the biggest motivation has always been my passion for building products. In the past, I’ve built other side-projects that were free. I even created a side-project called Wheelie when I was at RIT. It became the official ride-sharing online platform for RIT students. However, I shut it down two years ago after losing interest in the product and worrying too much about the safety issues. So yes, I love side-projects, because they’re fun and you get to learn a wide range of skills.

Another reason why I started Cronhub is financial income. I understand that money doesn’t necessarily buy happiness, but it can buy freedom, and I think that’s a big deal, at least for me. Not having enough money is always very stressful, and making money is usually more fun.

For the past year or so I’ve started to value my time a lot, and I decided that if I ever got involved in side-projects again, it wouldn’t be for free. Having a full-time job and a family doesn’t give you much free time, so I’d better justify what I spend my time on. This thinking really changed my perspective on the things I was keen to work on. This article is a reflection on that journey.

Motivation behind this article

The motivation for writing this article is primarily a desire to share my knowledge and experience with others who are thinking of becoming an indie hacker. When I started this journey I always enjoyed reading other people’s stories: how they came up with an idea, how they ran their businesses, and what it was like being an indie hacker.

Unfortunately, there isn’t a universal formula one can share for building a successful business. Even the word “successful” means different things to different people. One person may define success by revenue; others care about other metrics. Thus, my goal is not to give advice but rather to openly share everything I’ve learned and experienced throughout this year so you can draw your own conclusions. I also want to encourage other indie hackers to write about their stories, because having more data points only helps people who want to get started with building their own products and making money independently.

The Internet has become the most innovative medium to meet like-minded people, read stories and get inspired by them. Inspiration and motivation are two great forces that fuel your mind to achieve your goals and dreams. So I hope I can motivate you even a little bit with this article. If I do, then my time writing this is fully justified.

Launch

Starting my own business and having side-income has been on my mind for a long time. Since I changed my perspective about side-projects I knew that if I was going to dedicate my time to building something it wouldn’t be free. Getting paid for my own products was never about quitting my full-time job. I know many indie hackers whose main motivation is to become independent and not to work for anyone. I can see it. However, I enjoy my current job at Buffer and have no plans to leave it anytime soon. Will I ever work for myself full-time? I don’t know yet.

Coming up with an idea that could turn into a business wasn’t as hard as I imagined. I had a couple of requirements against which I evaluated my ideas. For each idea I asked the following questions:

Is this an idea for a market I’m familiar with?
Is this product solving my own problem?
Can I charge for this product?
Is this something I’m passionate about?

In the end, only two ideas made it to the last step:

1) An online course on how to build a SaaS product with Laravel and Vue.js
2) An easy cron monitoring tool for developers

I ended up choosing 2) only because I knew it would take less time to launch the MVP than to make an online course. I’d never made an online course before, so I knew it would take quite a lot of time to finish. I told myself I would give this idea a try, and if it didn’t work out I would step back and focus only on creating educational materials for developers. I knew there would always be demand for those types of products.

Cron monitoring has always been tricky and challenging. At Buffer I deal with many cron jobs and need to make sure they run on time, and if they fail I want to know about it. Before Buffer, when I was at YCharts, I created a custom dashboard for the team to track all internal scheduled jobs. The dashboard contained the list of scheduled jobs and some logs.

However, the way we knew whether the jobs had run was by looking at the internal dashboard. This meant we had to check the dashboard every single day to make sure all the checks had passed. This wasn’t ideal. When I talked to other developers I realized this pattern repeats in many engineering teams, which was a big signal of an existing problem. So I decided to build a product that makes it a breeze to monitor cron jobs. If I could build it, I could use it for my side-projects and for Buffer.

After working on the first version of the product for almost 2 months (part-time) I launched Cronhub on Product Hunt on March 20, 2018 . The reaction of the PH community was quite positive and this set the beginning of my indie hacker journey.


Cronhub’s listing on ProductHunt

Launching a new product is a great milestone to hit, but what comes after is probably what most people struggle with. Growing your product and finding product-market fit is a big challenge, especially for first-time founders.

Growing and attracting users

Trying to grow a business on the side comes with many challenges. Obviously, time is the biggest constraint, but figuring out when to work is another one most founders face. Early on, when you don’t have many users or customers, it’s really hard to rely on data and make data-driven decisions. So the only options left are to seek advice from other founders or to follow your own intuition.

Most of the product decisions early on were based on my own intuition. Since I was building Cronhub for myself, I knew exactly which features to focus on. Being your own user is a big advantage, and I strongly believe in the idea of solving your own problem.

BUF早餐铺 | Nation-State Attack Campaign Exploits an Adobe Flash 0day; BT to Strip Huawei 4G Equipment ...


Good morning, Buffers. Today is Thursday, December 6, 2018 (the 29th day of the tenth lunar month). In today’s brief: researchers discover a nation-state attack campaign exploiting an Adobe Flash 0day; researchers discover SplitSpectre, a new Spectre-like attack; Google patches 11 critical RCE vulnerabilities in Android; the US National Republican Congressional Committee investigates an email breach; BT will strip Huawei 4G equipment from its network and bar the company from bidding on core 5G equipment; and China’s MIIT publishes its Q3 2018 network security threat analysis and work summary.


The details follow:

Researchers discover a nation-state attack campaign exploiting an Adobe Flash 0day

360’s Advanced Threat Response Team was the first in the world to detect an APT campaign targeting Russia: a Russian-language hospital employee questionnaire document carrying the latest Flash 0day exploit and a self-destructing custom trojan, aimed at the medical institution attached to the Presidential Administration of Russia. 360 reported the 0day details to Adobe immediately, and Adobe responded with an expedited release of Flash 32.0.0.101 on December 5 to fix the vulnerability. [Source: 360 Safeguard]

Researchers discover SplitSpectre, a new Spectre-like attack

Researchers at Northeastern University and IBM Research have discovered a new variant of the Spectre CPU vulnerability that can be exploited through browser-based code. An important difference between the new flaw, dubbed SplitSpectre, and other Spectre variants is that it is easier to exploit. Using Firefox’s JavaScript engine SpiderMonkey 52.7.4, the researchers successfully executed SplitSpectre attacks against Intel Haswell and Skylake processors as well as AMD Ryzen processors. Users need not worry, however, as existing Spectre mitigations also block SplitSpectre. The research report is published on the IBM Research website. [Source: solidot]

Google patches 11 critical RCE vulnerabilities in Android

In December, Google fixed 53 Android vulnerabilities, 11 of them critical RCE flaws. Of those 11, six relate to Android’s media framework and system components. Four of the RCE vulnerabilities (CVE-2018-9549, CVE-2018-9550, CVE-2018-9551, CVE-2018-9552) affect AOSP builds of Android 7.0 through 9.0. Google says there are no reports of in-the-wild exploitation so far; its own Pixel and Nexus devices, along with flagship Android phones from Samsung, LG, HTC and others, can download and install the patches immediately, with other device makers and mobile carriers to follow. [Source: threatpost]

US National Republican Congressional Committee investigates email breach

In April 2018 the National Republican Congressional Committee (NRCC) discovered its systems had been compromised: an unauthorized third party could access the email accounts of four senior aides and, through them, NRCC systems. The committee brought in the FBI and the security firm CrowdStrike for an internal investigation. It is still unclear how the NRCC was hacked, but senior party officials who declined to be named believe the four aides’ accounts were under surveillance and that thousands of emails were sent to the intruders. The NRCC says the investigation is ongoing. [Source: bleepingcomputer]

BT to strip Huawei 4G equipment and bar it from bidding on core 5G gear

According to the Financial Times, British Telecom (BT) will remove Huawei equipment from its core 4G network within two years, bringing its mobile business into line with an internal policy of keeping Huawei out of its telecom infrastructure. BT has also barred Huawei from bidding on supply contracts for its core 5G network equipment, though it will still use Huawei components in less critical parts of the network. Over the past decade or more, BT and other telecom providers have removed most Huawei equipment from their core network facilities, which typically hold sensitive information such as user activity and personal data. [Source: Sina Finance]

MIIT publishes its Q3 2018 network security threat analysis and work summary

China’s Ministry of Industry and Information Technology (MIIT) published its Q3 2018 network security threat analysis and work summary. The bulletin says the public internet security situation remained severe in the third quarter, with multiple incidents seriously harming users’ legitimate interests, including repeated user data leaks and successive cloud platform outages. During the quarter, the industry handled roughly 33.97 million network security threats: about 6.53 million malicious network resources such as malicious IP addresses and domains; about 26.11 million malicious programs such as trojans, bots and viruses; about 48,000 security vulnerabilities and other latent risks; about 1.27 million security incidents such as compromised hosts, data leaks and web defacements; and about 10,000 other threats.

MIIT also laid out its next steps: beyond completing existing work, it will join with multiple companies and organizations in a special campaign against malicious mobile programs, aiming to promptly detect and eliminate such threats and protect the legitimate rights of network users. [Source: MIIT]

Flash 0day + Hacking Team RAT: Analysis and Attribution of an Attack Exploiting the Latest Flash 0day


Background

On November 29, 2018, the 360 Threat Intelligence Center captured two APT attack samples that used a Flash 0day vulnerability delivered through Microsoft Office Word documents; the target appears to be Ukraine. This is the second in-the-wild 0day attack the center has discovered this year. The attackers sent Word decoy documents containing the Flash 0day to their targets; once a user opened the document, the exploit fired and executed a follow-on trojan, giving the attackers control of the machine. The center notified Adobe as soon as the vulnerability was confirmed, and Adobe credited the 360 Threat Intelligence Center in the security advisory it published today.


Adobe confirms the vulnerability and publicly acknowledges the report

The exploit chain is quite clever: the attackers embedded the Flash 0day exploit in a Word decoy document, then packaged the decoy together with an image-formatted archive (JPG+RAR) inside a RAR archive sent to the target. When the victim extracts the archive and opens the Word document, the Flash 0day fires; the exploit code then extracts and runs the trojan stored inside the JPG image in the same directory (which is simultaneously a RAR archive), a trick that evades most antivirus products. The 360 Threat Intelligence Center’s detailed analysis of the trojan shows it is an upgraded version of the remote-control software leaked from Hacking Team in 2015. The related attack tooling is strongly linked to Hacking Team, and Hacking Team trojans carrying the same digital signature first appeared in August 2018.

Because this vulnerability and its exploit code are very likely to be repurposed by cybercrime groups and other APT actors for large-scale attacks, they pose a real threat, and the 360 Threat Intelligence Center advises users to take countermeasures.

Event timeline

November 29, 2018: 360 Threat Intelligence Center discovers leads on targeted attack samples

November 30, 2018: The Flash 0day is identified, confirmed, and reported to Adobe

December 3, 2018: Adobe confirms the vulnerability

December 5, 2018: 360 Threat Intelligence Center publishes its analysis report

Vulnerability summary

Vulnerability name: Adobe Flash Player remote code execution vulnerability

Threat type: Remote code execution

Threat level:

Vulnerability ID: CVE-2018-15982

Attack scenario: The attacker delivers a maliciously crafted Office file via web download, email, instant messaging, or similar channels and lures the victim into opening it, potentially triggering the vulnerability to execute arbitrary code on the victim’s system and take control.

Affected versions: Adobe Flash Player 31.0.0.153 and earlier

Unaffected versions: Adobe Flash Player 32.0.0.101 (the fixed, latest version)

Patch and upgrade URL: https://get.adobe.com/flashplayer/

Sample overview

From the captured sample information, we judge this to be an APT attack targeting Ukraine. In the days after the sample was uploaded to VirusTotal on November 29, only a handful of antivirus engines detected it; careful analysis by the 360 Threat Intelligence Center uncovered the 0day exploit it contained.

VirusTotal detection results at the time for one of the captured Word documents:
Attack process analysis

By tracing the sample’s execution, we reconstructed its overall flow:

Overall execution flow of the malicious document carrying the Flash 0day

The decoy document and the image-formatted archive

The attackers appear to have first sent the target an archive containing both a Word document that exploits the Flash 0day and a JPG image with apparently normal content, luring the victim into extracting it and opening the Word document:

And scan042.jpg is actually a RAR archive in JPG clothing: the file begins with a JPEG header but carries a RAR archive inside. Because RAR is lenient about where its format signature sits, the same file parses both as a JPG image and as a RAR archive:

JPEG file header

Embedded RAR archive
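As an aside, building such a JPG+RAR polyglot is trivial, which is part of why the trick is popular. Below is a minimal Python sketch, assuming two benign demo files on disk; it works because RAR tools hunt for the archive signature anywhere in the file, while image viewers stop after the JPEG data:

# Concatenate a demo image and a demo archive into one polyglot file.
with open("cover.jpg", "rb") as f:
    jpg = f.read()
with open("payload.rar", "rb") as f:
    rar = f.read()

with open("polyglot.jpg", "wb") as f:
    f.write(jpg + rar)   # opens as an image AND extracts as an archive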

The decoy document is in Russian: a staff questionnaire. Opening it prompts the user to play the embedded Flash content; as soon as playback is allowed, the 0day fires:
The Flash 0day exploit object

The decoy document embeds the Flash 0day exploit object in its page header:

The extracted Flash 0day exploit file:

The ShellCode embedded in the Flash file:

ShellCode

Once the Flash 0day fires, the ShellCode resolves API addresses dynamically, calls RtlCaptureContext to capture the current stack, scans the stack for the 0xDEC0ADDE and 0xFFEFCDAB markers (the data following these markers are the parameters for CreateProcess), and finally calls CreateProcess to launch a process and run a command:
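The marker scan is easy to re-create for study. Below is a hedged Python re-creation; whether the two DWORDs sit back-to-back in the real shellcode is an assumption, so the demo simply fabricates a stack snapshot:

import struct

MARKS = struct.pack("<II", 0xDEC0ADDE, 0xFFEFCDAB)  # little-endian sentinels

def args_after_marks(stack: bytes):
    """Return the bytes following the sentinel pair, as the shellcode does."""
    i = stack.find(MARKS)
    return None if i < 0 else stack[i + len(MARKS):]

fake_stack = b"\x00" * 64 + MARKS + b"cmd.exe /c whoami\x00"
print(args_after_marks(fake_stack))   # -> b'cmd.exe /c whoami\x00'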

Resolving function addresses dynamically:

Searching for the parameters needed by CreateProcess:

Calling CreateProcess to execute the command:
Command execution via the Flash 0day

The ShellCode ultimately runs the following command:

cmd.exe /c set path=%ProgramFiles(x86)%\WinRAR;C:\Program Files\WinRAR; && cd /d %~dp0 & rar.exe e -o+ -r -inul *.rar scan042.jpg & rar.exe e -o+ -r -inul scan042.jpg backup.exe & backup.exe

The command’s end goal is to use WinRAR to extract scan042.jpg (found in the document’s directory) and run the backup.exe inside it, completing the takeover of the victim’s machine:
Flash 0day vulnerability analysis

The 360 Threat Intelligence Center analyzed the root cause and the exploitation technique in detail:

Vulnerability analysis: use-after-free (UAF)

The decompiled SWF exploit is shown below; the exploit code is not obfuscated at all:

Analysis shows the vulnerability closely resembles CVE-2018-4878, the Flash 0day used by the Group 123 actor early this year. CVE-2018-4878 stemmed from DRMManager in Flash’s com.adobe.tvsdk package; this vulnerability instead involves Metadata in com.adobe.tvsdk.

The SWF sample begins by defining three Vectors, used mainly to reclaim freed memory (Var15 and Var16 serve the 32-bit and 64-bit paths respectively):

Inside the Var17 function, the code first performs some routine heap spraying and then declares a Metadata object. Metadata behaves like a map:

Metadata is a class in Flash’s SDK; its supported methods are shown below:

The key code that triggers the bug stores a ByteArray into the Metadata via setObject under a chosen key:

It then calls Var19(), which provokes Flash’s garbage collector and causes the Metadata’s storage to be freed:

The subsequent keySet call returns the Array for the keys that were set and assigns it to _local_6. setObject is defined as follows:

The keySet function:

With the array inside Metadata freed, the code loops over Var14, assigning values so that Class5 objects reclaim the freed memory:

Class5 is defined as follows:

Finally the code walks _local_6 to find the freed slot now occupied by a Class5; the telltale is the value 24 inside Class5, and differences in the object’s in-memory layout reveal whether the system is 32-bit or 64-bit. Calling Var19 again frees that Class5 object’s memory a second time, but since the Var14 Vector still holds a reference to it, execution enters the exploitation path matching the OS bitness:

Inside Var56, because one of the Class5 objects referenced from Var14 has been freed, the code again reclaims the freed memory, this time by assigning Class3 objects into the Var15 Vector:

Class3 is shown below; it defines a Class1 internally, and it is ultimately a Class1 that occupies the freed slot:

Class1 is defined as shown below. At this point Var14 and Var15 both hold references to the original Class5 memory, one treating it as a Class5 and the other as a Class3, producing a type confusion:

Because Class3 and Class5 are carefully laid out by the attacker, manipulating the referencing objects in Var14 and Var15 yields arbitrary memory read/write:

With arbitrary read/write established, the exploit scans memory for the function addresses it needs; from there the flow is the same as any typical Flash exploit:
Trojan analysis: backup.exe

The follow-on trojan is packed with VMProtect. Sample details:

MD5: 1CBC626ABBE10A4FAE6ABF0F405C35E2

Filename: backup.exe

Digital signature: IKB SERVICE UK LTD

Packer: VMProtect v3.00 3.1.2 2003-2018

Masquerading as the NVIDIA control panel

The trojan masquerades as NVIDIA’s control panel application and carries a legitimate digital signature, though the signing certificate has since been revoked:

NVIDIA Control Panel Application

Certificate details

The trojan also imitates the genuine NVIDIA control panel by emitting DirectX-related debug messages:

Amusingly, the author misspelled the word “Producer” as “Producet”:

DXGI WARNING: Live Producet at 0x%08x Refcount: 2. [STATE_CREATION WARNING #0: ]

Driving the trojan through a dedicated window procedure

Analysis of the VMProtect-obfuscated code shows that, on startup, the trojan registers a window class named “DXGESZ1Dispatcher”. That class’s window procedure is the trojan’s main control-flow function, and the trojan’s features are driven by dispatching window messages:

When CreateWindowExW is called, a WM_CREATE message reaches the window procedure, which creates three threads that probe the environment and check for user input activity to decide whether the program is running on a real machine:

Once those checks pass, a WM_USER+1 message is posted to the window procedure to advance execution. On receiving it, the procedure spawns another thread to resolve the API functions it needs from SHLWAPI.DLL and WS2_32.DLL:

It then uses OutputDebugStringA to emit the fake debug string “DXGI WARNING: Live Producet at 0x%08x Refcount: 2. [STATE_CREATION WARNING #0: ]”, a message a legitimate application using the DirectX APIs might plausibly output, to further impersonate the NVIDIA control panel:

The trojan also checks whether its own process ID is 4 and exits if so, a trick generally used to detect antivirus emulators and virtual machines:
Detecting antivirus software

The trojan uses various tricks to determine whether particular antivirus products are installed, for example checking the drivers directory for avckf.sys, a driver module specific to BitDefender:

It also runs the WMI query Select * from Win32_Service Where Name='WinDefend' AND State LIKE 'Running' to determine whether Windows Defender is running:
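For analysts who want to see exactly what the trojan learns, the same check can be reproduced from Python by shelling out to wmic (present on the Windows versions this sample targeted); this is an illustrative re-creation, not the trojan’s own code:

import subprocess

out = subprocess.run(
    ["wmic", "service", "where", "Name='WinDefend'", "get", "State"],
    capture_output=True, text=True,
).stdout
print("Windows Defender running:", "Running" in out)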
Persistence

The trojan copies itself to %APPDATA%\NVIDIAControlPanel\NVIDIAControlPanel.exe:

It then posts a window message so the main thread registers a scheduled task, achieving persistence:
Check-in and encrypted upload of host information

When the trojan’s window procedure receives a WM_USER message, it spawns a thread that gathers local process information, CPU details, user information, the machine’s time zone and more, encrypts the data, uploads it over HTTP to the C&C address 188.241.58.68, and then waits for new commands:

Enumerating installed software from the registry:

Running SELECT * FROM Win32_TimeZone to obtain the time zone:

Collecting disk information:

Connecting to the C&C at 188.241.58.68 and uploading the host information:
Attribution and links: Hacking Team

Detailed analysis by the 360 Threat Intelligence Center shows that the trojan used here is an upgraded version of the remote-control software leaked from Hacking Team in 2015. This build encrypts its strings and adds features such as the window-message-driven control flow described above.

Links to the leaked Hacking Team source code

Because backup.exe is protected with VMProtect, we could not recover pristine IDA F5 pseudocode, but we confirmed that the vast majority of its functionality and logic match Hacking Team’s previously leaked source code. Below are side-by-side comparisons of partial IDA F5 pseudocode against the leaked source:

Sandbox detection

Initializing WinHTTP

Closing the WinHTTP handle

Related samples

Via the trojan used in this 0day attack, the 360 Threat Intelligence Center linked two further samples of the same family. Both carry the same digital signature and essentially identical functionality, and both are Hacking Team RATs; trojans carrying this signature first appeared in August of this year.

One of them likewise masquerades as the NVIDIA control panel (C&C address: 80.211.217.149); the other masquerades as Microsoft OneDrive (C&C address: 188.166.92.212).

Hacking Team RAT masquerading as Microsoft OneDrive

About Hacking Team

Pulling together the various correlations, the 360 Threat Intelligence Center notes these points of comparison between this 0day attack and Hacking Team’s history:

The follow-on trojan run by this 0day exploit is an upgraded version of Hacking Team’s leaked remote-control software.
Hacking Team’s leaked materials show deep expertise in Flash 0days and exploitation techniques, and the exploitation approach used in this 0day is likewise highly generic and polished.
Hacking Team has long sold its cyber-espionage weapons to multiple intelligence agencies and government departments.

Summary

In sum, the 360 Threat Intelligence Center has tied the captured 0day exploit sample and its follow-on trojan to Hacking Team. Since the Hacking Team breach, the group’s new activity and newly developed spyware have been exposed repeatedly by foreign security vendors and news outlets, evidence that it never fully went quiet.

Mitigation advice

The 360 Threat Intelligence Center reminds organizations and users to be cautious when opening documents from unknown sources, and to install the latest Adobe Flash Player from the upgrade URL as soon as possible; antivirus tools such as 360 Safeguard or 360 Tianqing can also help reduce the risk.

References

Patch advisory: https://helpx.adobe.com/security/products/flash-player/apsb18-42.html
Patch and upgrade URL: https://get.adobe.com/flashplayer/
360 Threat Intelligence Center’s earlier analysis of a suspected Hacking Team Flash 0day: https://ti.360.net/blog/articles/cve-2018-5002-flash-0day-with-apt-campaign/
Leaked Hacking Team source code: https://github.com/hackedteam/scout-win
https://www.welivesecurity.com/2018/03/09/new-traces-hacking-team-wild/

IOC

Word documents

9c65fa48d29e8a0eb1ad80b10b3d9603

92b1c50c3ddf8289e85cbb7f8eead077

Word document author metadata

tvkisdsy

Кирдакова

Flash 0day exploit file

8A64017953D0840323318BC224BAB9C7

Flash 0day exploit file compile timestamp

Sep 15, 2014

Hacking Team backdoor samples

1cbc626abbe10a4fae6abf0f405c35e2

7d92dd6e2bff590437dad2cfa221d976

f49da7c983fe65ba301695188006d979

C&C addresses

188.241.58.68:80

188.166.92.212:80

80.211.217.149:80

Digital signature used by Hacking Team

Name: IKB SERVICE UK LTD

Serial number: 57 5f c1 c6 bc 47 f3 cf ab 90 0c 6b c1 8a ef 6d

Thumbprint: d8 7a a2 1d ab 22 c2 f1 23 26 0b 1d 7a 31 89 3c 75 66 b0 89

Google to Amazon: We’ll See Your Security Hub and Raise You a Command Centre


Google Cloud releases new centralised security database

Dominant cloud provider Amazon Web Services (AWS)’s launch of the AWS Security Hub was among its headline announcements at last week’s re:Invent summit.

The hub aggregates and automatically prioritises security alerts and findings across endpoint protection, compliance scanners and more.

Days later, Google Cloud wants the market to know that it also launched a “Cloud Security Command Centre” (or “Cloud SCC”; shall we call it a hub?) and (curse you, AWS) was “the first major cloud provider to offer organization-level visibility into assets, vulnerabilities, and threats” with its alpha launch of the tool in March 2018.

Google Cloud Security Command Centre: A Hub for Improved Visibility, Action

The hub allows users to view which Cloud Storage buckets are publicly accessible, identify VMs with public addresses, discover overly permissive firewall rules, and be alerted to instances that may have been compromised to perform coin mining.

“With this tool, security teams can answer questions like ‘Which cloud storage buckets contain PII?’, ‘Do I have any buckets that are open to the Internet?’ and ‘Which cloud applications are vulnerable to XSS vulnerabilities?'” Google Cloud said.
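The bucket question can also be answered directly against the Storage API. Here is a short sketch using the google-cloud-storage Python client that flags buckets granting access to allUsers or allAuthenticatedUsers; it assumes application default credentials are already configured:

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()
for bucket in client.list_buckets():
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for binding in policy.bindings:
        public = {"allUsers", "allAuthenticatedUsers"} & set(binding["members"])
        if public:
            print(f"PUBLIC: {bucket.name} grants {binding['role']} to {public}")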

Users can also see whether accounts outside their designated domain, or GCP organization, have access to their resources. It also integrates with third-party cloud security solutions from vendors such as Cavirin, Chef, and Redlock.

This is a growing demand among cloud users, who increasingly face the challenge of identifying precisely where their perimeter ends and the cloud’s begins, with the notion of “shared responsibility” leaving nobody entirely happy. Meanwhile, a single dashboard for all cloud security tools, and some on-prem ones too, is a winner.

“By integrating partner solutions with Cloud Security Command Center, you can get a comprehensive view of risks and threats all in one place without having to go to separate consoles,” said Andy Chang, a senior product manager at Google Cloud, in a blog post shared Wednesday.

“It includes expanded coverage across GCP services including Cloud Datastore, Cloud DNS, Cloud Load Balancing, Cloud Spanner, Container Registry, Kubernetes Engine, and Virtual Private Cloud; 13 IAM roles added for fine-grained access control across Cloud SCC; expanded client libraries including Java, Node, and Go; and self-serve partner security sources, such as Cavirin, Chef, and Redlock, via GCP Marketplace.”


3 Ways CISOs Can Boost Their Credibility Within the Enterprise


Security Boulevard Exclusive Series: What I Learned About Being a CISO After I Stopped Being a CISO

In this series we’re talking with former CISOs to collect the lessons they’ve learned about the job after they left―either to work as start-up founders, consultants or vendor executives. The goal is to take the wisdom they’ve gained from broader exposure to other security and business leaders and deliver those lessons back to CISOs who are still in the hot seat. We hope the current crop of CISOs can take some insight from their former compatriots and use it to up their game while they’re still on the job. Read more about the series here.


Lessons From Guy Bejerano, Co-founder and CEO of Safebreach

One of the biggest challenges CISOs and CSOs face today is that they’re tasked with ensuring the very important outcome of protecting business assets without being handed the authority or organizational ownership to fully assure that outcome.

“This challenge can be frustrating,” said Guy Bejerano, a security veteran with tons of past practitioner experience.

Bejerano started his security career leading information security and red team operations in the Israeli Air Force and then moved over to the private sector as CISO of Ness Technologies and later CSO of LivePerson.

“I had an opportunity to build security teams and security organizations from the ground up at three or four companies,” he explained. “Different verticals, different areas.”

Nowadays he’s the CEO of SafeBreach, a breach and attack simulation platform company that he co-founded in 2014 to help enterprises validate their security controls.

Since moving into the vendor space, he said his opinions on security haven’t changed drastically, but they have been reemphasized and enhanced by viewing problems from a different angle.

Since he moved out of the CISO role he’s increasingly been convinced that these security leaders must do a couple of key things to become more effective at reducing risks, gain more credibility within their organizations and really take the reins to control their destiny as security executives.

Cutting Through Vendor FUD is Crucial

The fear, uncertainty, and doubt (FUD) that security vendors peddle has been a longtime thorn in the side of CISOs, but Bejerano thinks it’s grown worse than ever.

“Vendors throw FUD at CISOs all the time trying to promote their products through the fear of the worst that will happen,” he said. “You hear lots of talk about zero-days, APTs and the unknown―but it’s more confusing than helping.”

Cutting through the FUD is crucial to CISO success for two major reasons. First, because when FUD drives security strategy, it often distracts the CISO from objectives that should be set by business priorities instead. That’s a big mistake as the profile of the CISO grows in the enterprise. Bejerano said that in the four years since he left the job, CISOs are getting more exposure to the board.

“There’s a lot more expectation from them to drive the entire risk equation in the organization and the budget around security is going up, so there’s an opportunity to change things,” he said. “You have that on one hand and on the other hand there’s a lot of vendor fatigue from CISOs.”

When CISOs let vendor FUD drive their strategy, it hurts their credibility within the business.

That leads us to the second reason why CISOs need a good BS meter when it comes to FUD: In a lot of cases the hysteria is masking some inadequacy of the product being marketed.

“We see it over and over again that there’s a huge difference between how these vendors position their products and what’s going on in reality,” he said.

This leads to poor-performing products and no accountability―another credibility killer for CISOs and their security teams.

As he explained, the CISOs he works with who he admires the most and who are most successful in their organizations are the ones who find meaningful ways to cut through the hype and make sure the vendors they pick fulfill their promises. This is step one to ensure these leaders have credibility when they step up in front of CEOs and boards to ask for money, support and so on.

Data-Driven Discussions Get Things Done at the Board Level

Which leads us to Bejerano’s next important lesson. To gain the kind of authority within an organization necessary to effect meaningful security change, CISOs have got to find better ways to gain influence at higher levels of the business, he said.

With perspective away from the job, he believes one of the key ways to do that is to let metrics, KPIs and other important data drive the discussions that CISOs have with business executives.

“Being more data-driven, more predictable and building KPIs that are business-centric is super critical,” he said. “CISOs need to be much more like a CFO. They need to show ROI from all the investments they make in technology, they need to fully understand the risk exposure of the organization, and be able to show security efficiency over time.”

This means finding ways to answer questions such as how well security investments are doing over time, measuring the reduction or increase of risk as a result of the introduction of new technologies or processes, and so on.

It’s Important to Take an Adversary’s Point of View

Bejerano admitted that, like a lot of CISOs today, he used to view cybersecurity world “from a very defensive position.”

As he explained, it’s hard to flip that lens around and view an enterprise’s position from the adversary’s perspective. But he increasingly believes it is important to do so.

“It’s not easy to look at the offensive side of the fence because hiring people today with a red team skill set or hacking skill set is not easy―it’s not easy to hire or to retain,” he said.

However, he believes the best CISOs focus on ensuring that they’re probing their technology the way attackers do and that they’re challenging defensive assumptions they may have made in the past to ensure that it fits into today’s threat realities.

“The first time a lot of CISOs find out whether their assumptions are right or not are when an attacker comes at them,” he said. “My first advice is don’t wait―challenge yourself, challenge your assumptions on a daily and continuous basis.”

Read the previous article in this series here, and more about the series here.

Is A Cybersecurity Degree Worth It?

Ready to learn cybersecurity? Browse courses like Cyber Security for the IoT, developed by industry thought leaders and Experfy in Harvard Innovation Lab.

Now that we have solidly entered the Information Age, one thing is very clear: technology is expanding and isn’t going to go away. More and more of our personal information is stored online, and while that can be convenient and helpful, it also comes with some risks. Cybercriminals have become more and more common over the years, stealing valuable information like credit card and social security numbers. With all the data breaches coming to light, cybersecurity is becoming one of the most in-demand fields in the tech industry.

Because the field is growing and demand for people with cybersecurity degrees is going up, many Americans are considering breaking into the cybersecurity field. Since many jobs don’t require a degree, it’s fair to ask: is a cybersecurity degree worth the time and money?

What is a Cybersecurity Degree?

Cybersecurity degrees are growing in popularity as cyber threats become more and more common. In these programs, students learn how to protect networks from attacks, preparing them for jobs in the public or private sector. Many jobs requiring a cybersecurity degree are within government institutions or in healthcare, where cyberattacks are common and devastating. So yes, if your goal is to work in technology or for the government, a cybersecurity degree is worth it.

There are cybersecurity degree programs at different levels, with colleges and universities offering everything from 2-year degrees to doctorates. Many of these programs can be completed online or in a hybrid model combining online work and in-person meetings. Sometimes, cybersecurity may be a specialization within a larger program instead of a standalone option. Getting a degree in cybersecurity can prepare students for the following positions:

Cyber Security Analyst
Cyber Security Engineer
Cyber Security Specialist
Cyber Security Architect

Each employer has their own educational requirements, so it’s a good idea to try to figure out what kinds of organizations you’d like to work with before you make a decision about applying for a cybersecurity degree program.

Cybersecurity Educational Options

One of the best things about getting a cybersecurity degree is that it gives you options. Having advanced computer skills and knowledge in cognitive computing can help you excel in many areas of business and may give you opportunities in other related fields and niches.

Niches that use cybersecurity education include computer science, computer engineering, information tech, and even criminal justice. There is a lot of crossover from other disciplines once you learn the principles of cybersecurity, networks, and automation.

Make Your Cyber Security Degree Worthwhile

Is a cybersecurity degree worth it? It all depends on your future plans. Before you spend the money and time pursuing an education in cybersecurity, it’s important to have a plan. It’s also key to make sure cybersecurity is something you’ll find interesting. You don’t want to find yourself bored after a year or two while getting your cybersecurity degree.

Ask yourself a few questions. What do you want your long-term career to be, and in what areas can you apply a cybersecurity degree? Do you see yourself protecting the government’s networks, keeping healthcare robots from being hacked, or figuring out new ways to keep hackers from breaking into sensitive databases? Many of these industries are growing, with the healthcare robotics market alone expected to grow to $2.8 billion by 2021. Up-and-coming niches can easily make a cybersecurity degree worth it. Knowing the industry and the kinds of organizations you might want to work for down the line will help you make smart decisions when considering your future.

Salaries for Cybersecurity Employees

Although salary isn’t the most important factor to consider when making decisions about your education, it is a factor. How much money are you expecting to make once you break into the field? Multi-six-figure salaries are not uncommon at the highest levels of cybersecurity, but most entry-level jobs start at around $90,000 or lower annually.

A Versatile Degree

A cybersecurity degree offers graduates a variety of job options and opens up an exciting field where you can do a lot. Make your cybersecurity degree worth it by having a plan, focusing on your goals, and ensuring that this path truly leads to something you want to do. For people with the right drive and determination, cybersecurity can be a satisfying and rewarding career.

6 Ways to Improve Your Security Posture Using Critical Security Controls


As we near the end of 2018, technology professionals and businesses alike are looking back on the last 12 months and evaluating highs and lows. For businesses, this can be an essential step when it comes to evaluating the current state of security processes and protocol within the organization. The security landscape has grown more complex in the past year and will continue to transform in the year ahead, and a revamp of security controls can make a world of difference in protecting your business against new threats.

Of course, security strategies and processes can be overwhelming to implement, and trying to stay up to date with policies can create stress for both users and the IT staff. A security posture assessment―including a review of security controls―can be a critical first step for any organization that wants to quickly identify their strengths and weaknesses and determine how to solidify their security defenses.

A security control can be implemented on physical property, computer systems or any asset. Security controls can then be used as a checklist of sorts against the current health of your organization’s security, but in general they also can be implemented as safeguards or countermeasures to avoid, detect, counteract or even minimize security risks. While there are many options for security controls, there are a handful of options that IT professionals may find the most effective when managing their security posture.

The Center for Internet Security (CIS) has established 20 controls for organizations to follow to protect themselves from cyberattack. The first six basic CIS controls are designed as an ideal checklist, or baseline, and a great first step to allow teams to keep up with IT security management best practices:

Inventory and Control of Hardware

All physical equipment needs to be actively managed. Asset management is vital, especially when it comes to security. It’s important to be aware of all the assets interacting in your system, because you can be assured that a hacker is aware of everything you have and is scanning for devices going on and offline to identify which ones are not patched. You are not able to apply controls or validate that everything is accurately applied until you are fully aware of each and every device. If you are not patching all of your devices, they are vulnerable.
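As a toy illustration of the discovery side, the sketch below pings its way through an assumed /24 office network from Python; real asset-management tools use authenticated discovery and passive monitoring rather than bare pings (the flags shown are for Linux ping):

import ipaddress
import subprocess

for ip in ipaddress.ip_network("192.168.1.0/24").hosts():
    # -c 1: one probe, -W 1: one-second timeout (Linux ping flags)
    alive = subprocess.run(["ping", "-c", "1", "-W", "1", str(ip)],
                           capture_output=True).returncode == 0
    if alive:
        print(f"{ip} responded; is it in the inventory and patched?")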

Inventory and Control of Software Assets

Attackers continuously scan target organizations looking for vulnerable versions of software that can be remotely exploited. Companies cannot protect what they cannot manage―it’s important to be aware of all assets in your IT infrastructure, including software. Keeping track of all software, including third-party software, is no light task. Auditing and actively managing inventory is vital. Tracking and correcting all software on the network includes ensuring only authorized software is installed and can execute. If unauthorized and unmanaged software is found, it needs to be prevented from installation or execution.

Continuous Vulnerability Management

Attackers have access to the same information and can take advantage of gaps. Companies that do not scan for vulnerabilities and proactively work to discover flaws first can be easily compromised. Continuous vulnerability management is not just for the operating system. Determine if you have passwords that are public or if your databases have any open vulnerabilities. Monitoring a network continuously is absolutely necessary to be able to update, track and change vulnerabilities. Luckily, there are tools that can identify vulnerabilities and allow users to verify what the impact is on their system.

Controlled Use of Administrative Privileges

The misuse of administrative privileges can allow attackers to spread throughout your IT infrastructure. It can be crucial to have a logging and event tool that can run continuously and alert you if an account has been added or removed from any type of admin group. Administrative access granted to the wrong person could be catastrophic to your security posture.

Not everybody should have access to all administrative privileges. The people who have administrative access rights should be a very small group. You also need your IT professionals to be comfortable with this group, because they should never want to go around them to receive access to anything.
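As a minimal sketch of the alerting idea mentioned above, the Python loop below polls a local admin group for membership changes; the group name is an assumption (Linux “sudo” here, it might be “wheel” or an AD group elsewhere), and a real deployment would feed a SIEM rather than print:

import grp
import time

GROUP = "sudo"   # assumed admin group name

def members() -> set:
    return set(grp.getgrnam(GROUP).gr_mem)

baseline = members()
while True:                       # poll forever; run this as a service
    time.sleep(60)
    current = members()
    for user in current - baseline:
        print(f"ALERT: {user} added to {GROUP}")
    for user in baseline - current:
        print(f"ALERT: {user} removed from {GROUP}")
    baseline = current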

Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers

While complex, this control needs to fortify the security configurations on hardware and software on mobile devices, laptops, workstations and servers. This security control must track, correct and report when needed.

As IoT grows, this can be even more important, since all of these devices can be connected. Whether it’s the day an iOS update is released or a problem with the network, it’s important to watch the ebbs and flows for any red flags.

Maintenance, Monitoring and Analysis of Audit Logs

Active logging and passive logging are different. We often turn on logging and feel comfortable with it running in the background. Passive logging grants you the ability to go back and look at different events and determine the cause of any issues. Active logging means you are responding to intrusions and anomalies in real time.

Without passive logging providing us a baseline for threshold monitoring, it’s difficult to determine the health of a system. The combination of these actions allows us to perform almost a health checkup of sorts on our system.

Don’t Forget Regular Cyberhygiene

There are always ways to improve upon any organization security posture, and that includes maintaining good cybersecurity hygiene. Whether that’s upgrading aging infrastructure and systems or consistently backing up data and teaching users how to use complex passwords, it’s vital to keep up strong cybersecurity hygiene throughout an organization.

Choosing to implement a security assessment can be a great first step toward a healthier and more secure organization. Beginning with the six basic and vital controls can offer a great backbone for a modern and fully developed security program.

A hierarchy of data security controls


For most enterprise IT security professionals, there are some common reasons that we need to protect a given data set. For the most part, they fall into a few easy categories:

Meeting a compliance or regulatory requirement
Implementing best practices
Minimizing the chance of a data breach of PII
Protecting sensitive financial data, intellectual property or secrets
Complying with a customer or supplier requirement

However, the controls needed to comply with these requirements sometimes aren’t so clear. The decision about which controls to implement needs to be the result of a risk/benefit calculation that depends on the priorities of the organization, the risks that need to be mitigated, and the time and budget required for implementation. With some exceptions for cloud environments (which we’ll cover shortly), the difficulty of implementation grows with the level of protection provided.

What I’ll cover here is a brief overview of the layers where protections can be applied, which data security risks can be mitigated at that protection layer, and then the data security controls that can be used to provide the risk mitigation needed, along with some background about implementation difficulty for each control type.

Data Security Protection Layers, Risks, and Controls
A hierarchy of data security controls

The physical layer: The lowest level of protection is applied at the physical media level: the actual flash or conventional disks, or other storage media, that contain the information needing protection. Protecting data at this level typically only protects against physical loss, theft or improper retirement of the media. The controls used are typically full disk encryption (FDE), KMIP key management of encryption for arrays or SAN systems, or encryption of a tape or a VM image.

For laptops and transportable physical media (like tapes), this level of encryption is a great control. If the encrypted item is lost, stolen or thrown away, there’s no risk of exposure. As the keys for decrypting the data are inaccessible or elsewhere, no access to the media is possible. But it’s a poor data security control for use with systems that need access to sensitive information within a data center or cloud environment.

As soon as the system authorized to access the information is booted and in operation, no limits on access to data are provided by this level of solution. A compromise of the system can easily result in an immediate compromise of the sensitive data it has access to.

Yes, in a data center this control can be implemented quickly (if turned on at deployment) and easily used to meet some compliance and regulatory requirements for protecting data. However, in these environments it’s an ineffective “check box” rather than useful protection.

The system layer: The next level of protection is at the system level: applying file or volume level encryption plus access controls and/or privileged access management.

File or volume level encryption and access controls protect against users at the system level having improper or unnecessary access to sensitive data. At their base level, they can be applied to protect against system-level roles and privileged user access (even root users on Linux/Unix systems), LDAP/Active Directory roles and users, Hadoop users/groups/zones, and container users and groups. These controls typically allow the users and processes that require access for their work or operation to see cleartext; all others see only ciphertext. Here’s an example for protecting a database file: access to the database file is only authorized from a signed database process and user; all others are authorized to see only metadata or ciphertext.

This helps to meet needs for organizations with compliance requirements that mandate access controls or need to meet data breach safe harbor criteria so that if data is lost, and the encryption keys are not, no breach of private data is deemed to have taken place. Usually, these tools also include a secure audit and management capability required by best practices or compliance standards.

File level encryption and access controls are also a critical component of data security when deploying to Infrastructure as a Service (IaaS) cloud environments. When deployed with enterprise premises key management capability, they ensure that information stays under enterprise control, by excluding privileged access from cloud admin level users and controls. This also protects against compromise at the cloud provider or even a failure of the cloud provider, as without access to the encryption keys, no access to an enterprise’s data in the cloud is possible.

Privileged access management tools provide primarily complementary capabilities to file/volume encryption and access control solutions (with a typically rich set of features for managing how privileged users work). The solutions are complementary because PAM solutions lack an enforcement control to ensure that if information is improperly accessed it can’t be used, such as that provided by file/volume level encryption and access controls.

Last, file and volume level encryption and access controls are fairly quick and painless to deploy. Typically no changes to operations and user workflows are required. Once deployed, protection is in place, and operations otherwise continue “as usual.” They are often the best compromise of protection level and deployment/operational ease for existing applications because of this transparent operation reducing attack surfaces available for system level threats with quick rollout and minimal impact on application operation and use.

Application and Database layer protections:While file level encryption and access controls protect well against system level threats, they are not designed to protect against improper access to data from within an application or database environment. To mitigate this next level or risk, tools that allow encryption of data, and access control from within applications and databases are needed.

These are best implemented when applications are first developed or are refreshed, as they typically require either application or database configuration changes (or both), that can be challenging to implement for a critical application while in operation. These controls include:

Application encryption
Database column encryption
TDE key management
Tokenization
Data masking
Database access monitoring

Each serves a separate need:

Tokenization (for instance) is used both to take servers handling credit card data out of audit scope by replacing the data with a “token” that represents it, and to meet needs such as obscuring driver’s license numbers, national identity or insurance numbers, and social security information. Controls coded within applications ensure that only authorized users see real data rather than a token. Application encryption is used to make it easy for programmers to add encryption to application data files or database columns when building new applications. TDE key management ensures that the master encryption keys used with native Oracle and SQL Server encryption capabilities are managed in a secure and compliant manner, so that these database encryption methods aren’t compromised by poor key management. And database access monitoring watches patterns of access to information within the database to recognize accounts that may have been compromised.
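A toy sketch of the tokenization idea in Python: the vault below is an in-memory dict, whereas a real token vault adds encryption, persistence and access control around the mapping:

import secrets

_vault = {}   # token -> original value; in-memory for the sketch only

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)   # opaque, random token
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]   # only an authorized service may call this

card = "4111 1111 1111 1111"
t = tokenize(card)
print(t)               # safe to store, log, or pass to out-of-scope systems
print(detokenize(t))   # original recovered by the authorized path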

The cloud SaaS and service layer: For the most part, cloud SaaS applications are a black box to enterprise customers. They can only use the service, not control how their information within the SaaS application is used, stored or protected. However, in the last few years, SaaS applications like Salesforce, Microsoft Office 365, AWS S3 storage and others have started to offer the capability to encrypt data within their environment, with enterprises keeping and controlling their encryption keys.

The result: regardless of where the data is physically stored, the enterprise controls access to it via control of its encryption keys. Cloud vendors are effectively barred from accessing enterprise data, and compromises or legal action at the cloud provider’s location can’t compel access to the information either. However, as SaaS vendors increasingly add this capability to their offerings, a new problem arises: as the typical enterprise now uses 25 to 50 or more SaaS applications (see this year’s Thales Data Threat Report ), managing encryption keys and key lifecycles can become a challenging problem.

To meet this need, BYOK and cloud key management applications are available that make it easy to manage encryption keys and key lifecycles to meet compliance, best practice, and regulatory requirements.

A last note: this summary should give you a good starting point in deciding which security controls are required based on the needs of your organization. And yes, Thales has a strong, cost-effective and well-implemented platform of data security solutions that can help you meet your data security needs in enterprise data centers, cloud environments, big data implementations, container deployments and more.

The post A hierarchy of data security controls appeared first on Data Security Blog | Thales eSecurity .

GUEST ESSAY: 5 security steps all companies should adopt from the Intelligence C ...


The United States Intelligence Community , or IC, is a federation of 16 separate U.S. intelligence agencies, plus a 17th administrative office.

The IC gathers, stores and processes large amounts of data, from a variety of sources, in order to provide actionable information for key stakeholders. And, in doing so, the IC has developed an effective set of data handling and cybersecurity best practices.

Related video: Using the NIST framework as a starting point

Businesses at large would do well to model their data collection and security processes after what the IC refers to as the “intelligence cycle.” This cycle takes a holistic approach to detecting and deterring external threats and enforcing best-of-class data governance procedures.


The IC has been using this approach to generate the reliable and accurate intelligence that is the basis for vital national security decisions, in particular those having to do with protecting critical U.S. infrastructure from cyber attacks.

In the same vein, businesses at large can use the intelligence cycle as a model to detect and deter any attacks coming from foreign intelligence services. Such threats impact more businesses than you may think.

Per a 2017 CNN source, nearly 100,000 agents from as many as 80 nations operate within the United States with the intention of targeting businesses to gain access to key U.S. infrastructure and personnel, and to steal proprietary intellectual property.



A. Hill

These threat actors in particular are targeting these sectors: chemicals, commercial facilities, communications, critical manufacturing, dams, the defense industrial base, emergency services, energy, financial services, food and agriculture, healthcare and public health, information technology, nuclear reactors, materials and waste operations, transportation systems, and water and wastewater systems.

Homeland Security lists the above as the 16 critical infrastructure sectors whose assets, systems, and networks, whether physical or virtual, are considered vital to the United States.

The Intelligence Cycle can be broken down into a five step process that results in dynamic solutions:

Planning. Determine the issues to be addressed and what information could be gathered to provide answers.

Collection. Gather raw data from various sources.

Processing. Synthesize the raw data into a usable state. Apply information and process management to yield insights.

Analysis. Integrate and evaluate the data into actionable final intelligence products.

Dissemination. Deliver the final intelligence products to the policymakers or decision makers who requested the data.



E. Hill

Intelligence experts agree that each of these five steps is instrumental in developing usable data for key stakeholders. The cycle gives the people carrying out their day-to-day tasks a sense of mission and transparency, and it allows information to flow freely and directly to those with a need to know.

Today businesses at large face much the same threats as the IC. There is much to be gained by following the approach to collaboration, processes, and methodologies that continues to work so well for the IC.

About the authors: Angela Hill is co-founder and CEO, and Edwin Hill is co-founder and CIO, of JADEX, LLC, a veteran-, minority-, and woman-owned consultancy based in Grand Rapids, Mich. JADEX helps organizations harness large data sets using solutions modeled on the Intelligence Community’s approach.
