
Arlo unveils wireless Ultra security cam with 4K resolution & 180-degree v ...


By Roger Fingas

Friday, November 30, 2018, 06:33 am PT (09:33 am ET)

Arlo on Friday announced its latest iPhone-connected security camera, the Ultra, an upcoming wireless model distinguished by features like a 4K image sensor, HDR, and a 180-degree field of view.



The camera is designed to be used both indoors and outdoors, and comes with a magnetic mount as well as a weather-resistant magnetic charging cable. The outdoor focus also shows up in features such as an LED spotlight and a siren, the latter of which can be triggered manually or by rules for motion or sound detection.

Arlo is bundling the Ultra with a peripheral called the SmartHub, which serves as both a Wi-Fi extender for Arlo cameras and a way of recording footage to a microSD card. While locally-saved footage can be recorded at maximum resolution, the Ultra will ship with a one-year Arlo Smart Premier cloud subscription, which normally records only 1080p highlight clips for up to 30 days ― 4K cloud recording will be a paid upgrade.



Smart Premier costs $9.99 per month after the free period, and additionally offers support for motion zones, person detection, more detailed notifications, and up to 10 cameras. In the U.S. the plan comes with "e911," which routes emergency responders to your home address rather than to your iPhone's location.

People can also downgrade to the cheaper Arlo Smart plan, which costs $2.99 per month but covers just a single camera, drops e911, and reduces cloud storage to 7 days. A default free plan also offers 7 days of storage, sacrificing motion zones, person detection, and enhanced notifications.

The Ultra should ship sometime in the first quarter of 2019. Arlo has yet to announce pricing, or even smart home platform compatibility ― it's unlikely to support Apple's HomeKit, however, since Apple currently doesn't allow the technology on fully wireless cameras. The only Arlo with HomeKit support is the Arlo Baby.


Marriott Claims Up to 500 Million Guests Had Their Records Hacked


Photo: Getty

Marriott, one of the world’s largest hotel chains, announced on Friday that it has experienced a jaw-dropping data breach that may have exposed the personal data of up to 500 million guests going all the way back to 2014.

In a filing with the SEC, Marriott explained that it first learned about the breach on September 8, when a security tool alerted administrators that someone was attempting to gain unauthorized access to its Starwood reservation system in the United States. Here's Marriott's explanation of what happened next:

Marriott quickly engaged leading security experts to help determine what occurred. Marriott learned during the investigation that there had been unauthorized access to the Starwood network since 2014. The company recently discovered that an unauthorized party had copied and encrypted information, and took steps towards removing it. On November 19, 2018, Marriott was able to decrypt the information and determined that the contents were from the Starwood guest reservation database.

The way the statement is worded is a bit confusing, but it appears to be saying the intruders did manage to obtain an encrypted copy of the database before trying to remove evidence of their activities. We’ve reached out to Marriott to clarify exactly what the company means and we’ll update this post when we receive a reply.

Marriott said its team is still “identifying duplicate information” on its database but it believes the hackers were able to access the data of around 500 million guests. And we’re talking about a lot of data points. The list includes: “some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (“SPG”) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences.” It said that credit card numbers were included for some guests but they were obscured with standard AES-128 encryption. It’s still unclear if the attackers also obtained the necessary keys to decrypt the credit card info.

Marriott acquired the Starwood hotels brand in 2016, so it appears the company may have inherited this problem, since its researchers believe the intruders have had access since 2014. In its filing with the SEC, it said it will work to phase out the Starwood systems.

Since Marriott has hotels in Europe, it will likely come under scrutiny by authorities from the EU and could face financial penalties under GDPR regulations. It has set up a dedicated website to answer customer questions and said it will begin notifying customers individually via email.

[Kroll, Marriott via TechCrunch]

Analyzing a Bug in the Windows XP Version of EternalBlue

0x00 Background

When the EternalBlue exploit first came out, I could reliably take over Windows 7, but I never managed to succeed against Windows XP. I tried every combination of patches and Service Packs, but the exploit either failed to land or blue-screened the system. I didn't dig into it at the time, because there was still plenty left to explore in FuzzBunch (the leaked NSA toolkit).

Then one day I found a Windows XP host on the internet and wanted to give FuzzBunch a try. To my surprise, the exploit worked on the very first attempt.

So the question became: why did the exploit fail in my "lab" environment yet succeed in the wild?

Here is the answer up front: the NT/HAL implementation differs between single-core, multi-core, and PAE CPUs, and as a result FuzzBunch's XP payload cannot be used in a single-core environment.

0x01 Multiple Exploit Chains

It is worth knowing that there are several versions of EternalBlue. The exploitation of the Windows 7 kernel has already been analyzed in detail elsewhere, and JennaMagius, sleepya_, and I also studied how to port it to Windows 10.

For Windows XP, however, FuzzBunch contains a completely different exploit chain, which cannot use exactly the same base primitives (for instance, SMB2 and SrvNet.sys do not exist on that system). I discussed this in depth in my DerbyCon 8.0 talk (see the slides and the recording).

On Windows XP the KPCR (Kernel Processor Control Region) of the boot processor is a static structure, and to get shellcode execution we overwrite the value of KPRCB.PROCESSOR_POWER_STATE.IdleFunction.

0x02 How the Payload Works

As it turns out, the exploit itself is fine in the lab environment; what goes wrong is FuzzBunch's payload.

The ring-0 shellcode performs the following main steps:

1. Obtain the nt and hal base addresses using the now-deprecated KdVersionBlock trick;

2. Resolve a few function pointers needed during exploitation, such as hal!HalInitializeProcessor;

3. Repair the boot processor's KPCR/KPRCB structures that were corrupted during exploitation;

4. Run DoublePulsar, installing the backdoor on the SMB service;

5. Gracefully resume the normal execution flow (nt!PopProcessorIdle).

Anomalous branch on single-core hosts

After setting a hardware breakpoint at the IdleFunction branch and at shellcode entry at +0x170 (after the initial XOR/Base64 shellcode decoder has run), we can see that execution on a host with a multi-core processor branches differently from a single-core host.

kd> ba w 1 ffdffc50 "ba e 1 poi(ffdffc50)+0x170;g;"

On the multi-core host, a function pointer to hal!HalInitializeProcessor is found.

This function is presumably used to clean up the half-corrupted KPRCB.

On the single-core host, hal!HalInitializeProcessor is not found and sub_547 returns NULL. The payload cannot continue; it cleans up after itself by zeroing out as much of itself as it can, and sets up a ROP chain to free some memory and resume the execution flow.

Note: the shellcode also performs this step after a successful run, i.e. after DoublePulsar has been installed for the first time.

0x03 Root Cause Analysis

The shellcode function sub_547 cannot correctly find the address of hal!HalInitializeProcessor on single-core hosts, which forcibly aborts the whole payload. We need to reverse the shellcode function to find the exact reason the payload fails.

The problem in the kernel shellcode is that it does not account for all the different types of NT kernel executables available on Windows XP. More specifically, the multi-processor NT builds (such as ntkrnlamp.exe) work fine, while the single-processor builds (such as ntoskrnl.exe) run into trouble. A similar split exists between halmacpi.dll and halacpi.dll.

The first thing sub_547 does is obtain a HAL function imported by the NT image. The payload starts by reading offset 0x1040 in the NT image to find a HAL function.

On multi-core Windows XP hosts, reading that offset achieves the expected result and the shellcode correctly finds hal!HalQueryRealTimeClock:

On single-core hosts, however, there is no HAL import table at that location; instead it holds a string table:

At first I thought this was the root cause, but it turned out to be a red herring, because there is correction code for this case. The shellcode checks whether the value at 0x1040 is an address that lies inside the HAL range. If it is not, it subtracts 0xc40 from that offset and then searches for an address in the HAL range in increments of 0x40, until the search reaches 0x1040 again.

Eventually the single-core payload does find a HAL function, namely hal!HalCalibratePerformanceCounter:

So far everything is fine, and we can see that the Equation Group did handle detecting the different types of XP NT images.

HAL variable byte table

Now that the shellcode has found a function inside the HAL, it tries to locate hal!HalInitializeProcessor. The shellcode carries a built-in table (at offset 0x5e7) whose entries consist of a one-byte length field followed by an expected byte sequence. The shellcode walks forward from the address of the HAL function it first found and compares the first 0x20 bytes of each new candidate function against the bytes in the table.
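To make that matching logic concrete, here is a rough Python sketch of this kind of length-prefixed signature scan. It is an illustration of the idea only, not the original shellcode, and the step size used to advance between candidate functions is an assumption.

def parse_sig_table(table):
    """Parse a table of 1-byte-length-prefixed byte sequences, as described above."""
    sigs, i = [], 0
    while i < len(table) and table[i] != 0:
        length = table[i]
        sigs.append(bytes(table[i + 1:i + 1 + length]))
        i += 1 + length
    return sigs

def find_function(hal_image, start, signatures, step=0x10, window=0x20):
    """Walk forward from `start`, comparing the first `window` bytes of each
    candidate against every signature; return the first matching offset."""
    for offset in range(start, len(hal_image) - window, step):
        head = hal_image[offset:offset + window]
        if any(sig in head for sig in signatures):
            return offset
    return None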

In the multi-processor HAL, the five bytes being searched for can indeed be found:

In the single-processor HAL, however, the picture is different:

There is a similar mov instruction, but it is not a movzx instruction. The byte sequence the shellcode searches for is not present in this function, so the shellcode can never discover it.

0x04 Conclusion

As everyone knows, identifying functions by searching for byte sequences is not a reliable approach across different Windows versions and Service Packs (the various debates on the Windows kernel development mailing lists give a glimpse of this). The lesson from this bug is that exploit developers have to be thorough and mind the single-core/multi-core/PAE differences in NTOSKRNL and HAL.

It is rather curious that the exploit developers used the KdVersionBlock trick and byte-sequence searching to find functions in this payload. In the Windows 7 payload, the developers search memory backwards from the KPCR IDT, then parse the PE header and eventually find the NT image and its export table, which is a much more robust way of doing it.

There are other ways to find this HAL function (for example through the HAL exports), and other ways to clean up the corrupted KPCR structure; that work is left as an exercise for the reader.

There is circumstantial evidence that the exploit developers began building FuzzBunch's main framework in late 2001. The developers appear to have written and tested the payload only on multi-processor machines? Perhaps that is a clue to the period when the XP exploit was developed. Windows XP was released on October 25, 2001; although IBM introduced the first dual-core processor (POWER4) that same year, Intel and AMD did not offer comparable products until 2004 and 2005.

This is also an example of how the ETERNAL family of exploits evolved. The Equation Group may reuse the same exploit and payload primitives, but they develop their exploits in different ways, so that if one approach fails, the diversity of exploits still lets them finish the job. Studying these exploit samples teaches us a great deal of arcane Windows kernel internals.

The "Network Protocol Ports" You Must Master Before Becoming a "Hacker"


In the previous article we described "how network communication works" in detail and touched on the concept of ports without going deeper. Today I will explain "network protocol ports" in detail, because this little thing is also one of the means hackers routinely use for penetration and intrusion.

1. First, the definitions of several different kinds of "port"

A computer "port" is the translation of the English word port and can be thought of as the outlet through which a computer exchanges information with the outside world. In the hardware domain a port is also called an interface, e.g. a USB port or a serial port.

In the software domain, a port generally refers to a communication endpoint for connection-oriented and connectionless services in a network; it is an abstract software construct that includes data structures and I/O (basic input/output) buffers.

In networking technology, "port" has several meanings. The ports on hubs, switches, and routers are the interfaces used to connect other network devices, such as RJ-45 ports and serial ports.

The "network protocol ports" we discuss today are not ports in the physical sense but specifically the ports of the TCP/IP protocol suite, i.e. ports in the logical sense.

2. A brief description of network protocol ports

What is a port in a network protocol? If an IP address is a house, ports are the doors in and out of that house. A real house has only a few doors, but a single IP address can have as many as 65,536 (256×256) ports! Ports are identified by port numbers, which are integers only, ranging from 0 to 65535.

On the Internet, hosts send and receive datagrams using the TCP/IP protocol suite, and each datagram is routed through the internetwork according to its destination host's IP address. So delivering a datagram to the destination host is not the problem. Where is the problem, then? We know that most operating systems support many programs (processes) running at the same time, so which of those concurrently running processes should the destination host hand the received datagram to? Clearly this had to be solved, and that is why the port mechanism was introduced.

The operating system assigns protocol ports (what we usually just call ports) to the processes that need them, and each protocol port is identified by a positive integer such as 80, 139, or 445. When the destination host receives a datagram, it uses the destination port number in the packet header to deliver the data to the corresponding port, and the process associated with that port picks up the data and waits for the next batch. If the concept of a port still feels abstract at this point, keep listening.

A port is really a queue. The operating system assigns a different queue to each process; datagrams are pushed into the corresponding queue according to their destination port and wait to be picked up by the process. In extreme cases this queue can overflow, but the operating system lets each process specify and adjust the size of its own queue.

The process receiving datagrams needs to open its own port, and the process sending datagrams needs to open a port too; the datagram therefore carries the source port, so the receiver can send data back to that port.
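To make the source/destination port idea concrete, here is a small self-contained Python sketch using UDP sockets; the loopback address and port 50007 are arbitrary choices for illustration only.

import socket

# "Server" process: opens (binds) port 50007 and waits for datagrams addressed to it.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 50007))

# "Client" process: the OS picks an ephemeral source port automatically.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 50007))

data, (src_ip, src_port) = server.recvfrom(1024)
print("got %r from source port %d" % (data, src_port))  # the reply goes back to src_port
server.sendto(b"ack", (src_ip, src_port))
print(client.recvfrom(1024))
client.close()
server.close()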

3. "Network protocol ports" in detail

You often hear things online like "my host has so many ports open, will it get hacked?!" or "which ports are safer to open? And which port should my service use?!" It does seem magical that a single host can have so many strange ports. What are these ports actually for?

Because every network service does something different, packets have to be handed to different services for processing. So when your host runs both an FTP and a WWW service at the same time, packets sent by others are dispatched to the FTP service or the WWW service according to the port number in the TCP header, and nothing gets mixed up. Many people ask: "my computer runs FTP, WWW, and e-mail all at once; when data comes in, how does the computer know what to do? Can it really never get it wrong?" Now you know the answer: "it's because the ports are different!" Every service listens on its own specific port, so you don't need to worry about the computer misjudging.

Every TCP connection must be initiated by one end (usually the client), which generally picks a random port number above 1024. That TCP packet has (and only has) the SYN flag set and is the very first packet of the connection. If the other end (usually the server) accepts the request (of course, particular services run on particular ports, e.g. FTP on port 21), it sends back the second packet of the connection, which has both the SYN and ACK flags set, and at the same time allocates resources locally for the coming connection. After the requesting end receives the server's first response packet, it must reply with another packet carrying only the ACK flag (in fact, every subsequent packet of the connection carries the ACK flag).

Only when the server receives the client's acknowledgement (ACK) packet, i.e. the third packet of the connection, is the connection between the two ends formally established. This is the principle of the TCP "three-way handshake". After the handshake, the client side normally sits on a randomly chosen port above 1024, while the server side uses whichever port the service in question has open, e.g. 80 for WWW and 21 for FTP as the normal connection channel.
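A quick way to see the result of the handshake described above is to open a TCP connection and print both endpoints; the ephemeral local port the OS picks will normally be above 1024. example.com and port 80 are just a convenient public test target here.

import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    local_ip, local_port = s.getsockname()     # our side: ephemeral port, usually > 1024
    remote_ip, remote_port = s.getpeername()   # server side: the well-known port 80
    print("local  %s:%d" % (local_ip, local_port))
    print("remote %s:%d" % (remote_ip, remote_port))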

4. Port classification

1. By protocol, there are two kinds of port

One kind is TCP ports and the other is UDP ports. Computers communicate with each other in two ways: one in which the sender can confirm whether the message arrived, i.e. an acknowledged mode, which mostly uses the TCP protocol; and one in which the sender simply sends and does not check whether the message arrived, which mostly uses the UDP protocol. The ports used by services built on these two protocols are accordingly divided into TCP ports and UDP ports.

From the OSI seven-layer model we know that TCP/UDP work at the transport layer. The biggest difference between the transport layer and the network layer is that the transport layer provides process-to-process communication: the final address of a network communication includes not only the host address but also some identifier that can describe the process. The protocol port proposed by TCP/IP can therefore be regarded as an identifier of a communicating network process.

When an application (generally called a process once it is loaded into memory and running) establishes a connection (binding) to a port through system calls, all data the transport layer delivers to that port is received by the corresponding process, and all data that process hands to the transport layer is sent out through that port. In TCP/IP implementations, port operations resemble ordinary I/O operations: a process acquiring a port is like acquiring a locally unique I/O file, which it can access with ordinary read/write calls much like a file descriptor. Each port has an integer descriptor called a port number that distinguishes it from other ports. Because TCP and UDP are two completely independent software modules in the TCP/IP transport layer, their port numbers are also independent of each other: TCP can have a port 255 and UDP can also have a port 255 without any conflict. Port numbers are assigned in two basic ways. The first is global assignment, a centralized scheme in which a recognized central authority assigns numbers according to users' needs and publishes the results. The second is local assignment, also called dynamic binding: when a process needs transport-layer services, it applies to the local operating system, which returns a locally unique port number, and the process then binds itself to that port through the appropriate system call. TCP/IP port assignment combines both approaches: a small set of numbers is reserved and assigned globally to service processes, so every standard server has a globally recognized "well-known port" whose number is the same even on different machines, while the remaining numbers are free ports assigned locally. TCP and UDP stipulate that only ports below 256 can serve as reserved ports.

2. 按端口号可分为3大类: 公认端口(WellKnownPorts):从0到1023,它们紧密绑定(binding)于一些服务。通常这些端口的通讯明确表明了某种服务的协议。例如:80端口实际上总是HTTP通讯。 注册端口(RegisteredPorts):从1024到49151。它们松散地绑定于一些服务。也就是说有许多服务绑定于这些端口,这些端口同样用于许多其它目的。例如:许多系统处理动态端口从1024左右开始。 动态和/或私有端口(Dynamicand/orPrivatePorts):从49152到65535。理论上,不应为服务分配这些端口。实际上,机器通常从1024起分配动态端口。但也有例外:SUN的RPC端口从32768开始。 五、已知服务、木马常用端口列表 1. TCP端口 7 = 回显 9 = 丢弃 11 = 在线用户 13 = 时间服务 15 = 网络状态 17 = 每日引用 18 = 消息发送 19 = 字符发生器 20 = ftp数据 21 = 文件传输 22 = SSH端口 23 = 远程终端 25 = 发送邮件 31 = Masters Paradise木马 37 = 时间 39 = 资源定位协议 41 = DeepThroat木马 42 = WINS 主机名服务 43 = WhoIs服务 58 = DMSetup木马 59 = 个人文件服务 63 = WHOIS端口 69 = TFTP服务 70 = 信息检索 79 = 查询在线用户 80 = WEB网页 88 = Kerberros5认证 101 = 主机名 102 = ISO 107 = 远程登录终端 109 = pop2邮件 110 = pop3邮件 111 = SUN远程控制 113 = 身份验证 117 = UUPC 119 = nntp新闻组 121 = JammerKillah木马 135 = 本地服务 138 = 隐形大盗 139 = 文件共享 143 = IMAP4邮件 146 = FC-Infector木马 158 = 邮件服务 170 = 打印服务 179 = BGP 194 = IRC PORT 213 = TCP OVER IPX 220 = IMAP3邮件 389 = 目录服务 406 = IMSP PORT 411 = DC++ 421 = TCP Wrappers 443 = 安全WEB访问 445 = SMB(交换服务器消息块) 456 = Hackers Paradise木马 464 = Kerberros认证 512 = 远程执行或卫星通讯 513 = 远程登录与查询 514 = SHELL/系统日志 515 = 打印服务 517 = Talk 518 = 网络聊天 520 = EFS 525 = 时间服务 526 = 日期更新 530 = RPC 531 = RASmin木马 532 = 新闻阅读 533 = 紧急广播 540 = UUCP 543 = Kerberos登录 544 = 远程shell 550 = who 554 = RTSP 555 = Ini-Killer木马 556 = 远程文件系统 560 = 远程监控 561 = 监控 636 = 安全目录服务 666 = Attack FTP木马 749 = Kerberos管理 750 = Kerberos V4 911 = Dark Shadow木马 989 = FTPS 990 = FTPS 992 = TelnetS 993 = IMAPS 999 = DeepThroat木马 1001 = Silencer木马 1010 = Doly木马 1011 = Doly木马 1012 = Doly木马 1015 = Doly木马 1024 = NetSpy木马 1042 = Bla木马 1045 = RASmin木马 1080 = SOCKS代理 1090 = Extreme木马 1095 = Rat木马 1097 = Rat木马 1098 = Rat木马 1099 = Rat木马 1109 = Kerberos POP 1167 = 私用电话 1170 = Psyber Stream Server 1214 = KAZAA下载 1234 = Ultors/恶鹰木马 1243 = Backdoor/SubSeven木马 1245 = VooDoo Doll木马 1349 = BO DLL木马 1352 = Lotus Notes 1433 = SQL SERVER 1492 = FTP99CMP木马 1494 = CITRIX 1503 = Netmeeting 1512 = WINS解析 1524 = IngresLock后门 1600 = Shivka-Burka木马 1630 = 网易泡泡 1701 = L2TP 1720 = H323 1723 = PPTP(虚拟专用网) 1731 = Netmeeting 1755 = 流媒体服务 1807 = SpySender木马 1812 = Radius认证 1813 = Radius评估 1863 = MSN聊天 1981 = ShockRave木马 1999 = Backdoor木马 2000 = TransScout-Remote-Explorer木马 2001 = TransScout木马 2002 = TransScout/恶鹰木马 2003 = TransScout木马 2004 = TransScout木马 2005 = TransScout木马 2023 = Ripper木马 2049 = NFS服务器 2053 = KNETD 2115 = Bugs木马 2140 = Deep Throat木马 2401 = CVS 2535 = 恶鹰 2565 = Striker木马 2583 = WinCrash木马 2773 = Backdoor/SubSeven木马 2774 = SubSeven木马 2801 = Phineas Phucker木马 2869 = UPNP(通用即插即用) 3024 = WinCrash木马 3050 = InterBase 3128 = squid代理 3129 = Masters Paradise木马 3150 = DeepThroat木马 3306 = mysql 3389 = 远程桌面 3544 = MSN语音 3545 = MSN语音 3546 = MSN语音 3547 = MSN语音 3548 = MSN语音 3549 = MSN语音 3550 = MSN语音 3551 = MSN语音 3552 = MSN语音 3553 = MSN语音 3554 = MSN语音 3555 = MSN语音 3556 = MSN语音 3557 = MSN语音 3558 = MSN语音 3559 = MSN语音 3560 = MSN语音 3561 = MSN语音 3562 = MSN语音 3563 = MSN语音 3564 = MSN语音 3565 = MSN语音 3566 = MSN语音 3567 = MSN语音 3568 = MSN语音 3569 = MSN语音 3570 = MSN语音 3571 = MSN语音 3572 = MSN语音 3573 = MSN语音 3574 = MSN语音 3575 = MSN语音 3576 = MSN语音 3577 = MSN语音 3578 = MSN语音 3579 = MSN语音 3700 = Portal of Doom木马 4080 = WebAdmin 4081 = WebAdmin+SSL 4092 = WinCrash木马 4267 = SubSeven木马 4443 = AOL MSN 4567 = File Nail木马 4590 = ICQ木马 4661 = 电驴下载 4662 = 电驴下载 4663 = 电驴下载 4664 = 电驴下载 4665 = 电驴下载 4666 = 电驴下载 4899 = Radmin木马 5000 = Sokets-de木马 5000 = UPnP(通用即插即用) 5001 = Back Door Setup木马 5060 = SIP 5168 = 高波蠕虫 5190 = AOL MSN 5321 = Firehotcker木马 5333 = NetMonitor木马 5400 = Blade Runner木马 5401 = 
Blade Runner木马 5402 = Blade Runner木马 5550 = JAPAN xtcp木马 5554 = 假警察蠕虫 5555 = ServeMe木马 5556 = BO Facil木马 5557 = BO Facil木马 5569 = Robo-Hack木马 5631 = pcAnywhere 5632 = pcAnywhere 5742 = WinCrash木马 5800 = VNC端口 5801 = VNC端口 5890 = VNC端口 5891 = VNC端口 5892 = VNC端口 6267 = 广外女生 6400 = The Thing木马 6665 = IRC 6666 = IRC SERVER PORT 6667 = 小邮差 6668 = IRC 6669 = IRC 6670 = DeepThroat木马 6711 = SubSeven木马 6771 = DeepThroat木马 6776 = BackDoor-G木马 6881 = BT下载 6882 = BT下载 6883 = BT下载 6884 = BT下载 6885 = BT下载 6886 = BT下载 6887 = BT下载 6888 = BT下载 6889 = BT下载 6890 = BT下载 6939 = Indoctrination木马 6969 = GateCrasher/Priority木马 6970 = GateCrasher木马 7000 = Remote Grab木马 7001 = windows messager 7070 = RealAudio控制口 7215 = Backdoor/SubSeven木马 7300 = 网络精灵木马 7301 = 网络精灵木马 7306 = 网络精灵木马 7307 = 网络精灵木马 7308 = 网络精灵木马 7424 = Host Control Trojan 7467 = Padobot 7511 = 聪明基因 7597 = QaZ木马 7626 = 冰河木马 7789 = Back Door Setup/ICKiller木马 8011 = 无赖小子 8102 = 网络神偷 8181 = 灾飞 9408 = 山泉木马 9535 = 远程管理 9872 = Portal of Doom木马 9873 = Portal of Doom木马 9874 = Portal of Doom木马 9875 = Portal of Doom木马 9898 = 假警察蠕虫 9989 = iNi-Killer木马 10066 = Ambush Trojan 10067 = Portal of Doom木马 10167 = Portal of Doom木马 10168 = 恶邮差 10520 = Acid Shivers木马 10607 = COMA木马 11000 = Senna Spy木马 11223 = Progenic木马 11927 = Win32.Randin 12076 = GJammer木马 12223 = Keylogger木马 12345 = NetBus木马 12346 = GabanBus木马 12361 = Whack-a-mole木马 12362 = Whack-a-mole木马 12363 = Whack-a-Mole木马 12631 = WhackJob木马 13000 = Senna Spy木马 13223 = PowWow聊天 14500 = PC Invader木马 14501 = PC Invader木马 14502 = PC Invader木马 14503 = PC Invader木马 15000 = NetDemon木马 15382 = SubZero木马 16484 = Mosucker木马 16772 = ICQ Revenge木马 16969 = Priority木马 17072 = Conducent广告 17166 = Mosaic木马 17300 = Kuang2 the virus Trojan 17449 = Kid Terror Trojan 17499 = CrazzyNet Trojan 17500 = CrazzyNet Trojan 17569 = Infector Trojan 17593 = Audiodoor Trojan 17777 = Nephron Trojan 19191 = 蓝色火焰 19864 = ICQ Revenge木马 20001 = Millennium木马 20002 = Acidkor Trojan 20005 = Mosucker木马 20023 = VP Killer Trojan 20034 = NetBus 2 Pro木马 20808 = QQ女友 21544 = GirlFriend木马 22222 = Proziack木马 23005 = NetTrash木马 23006 = NetTrash木马 23023 = Logged木马 23032 = Amanda木马 23432 = Asylum木马 23444 = 网络公牛 23456 = Evil FTP木马 23456 = EvilFTP-UglyFTP木马 23476 = Donald-Dick木马 23477 = Donald-Dick木马 25685 = Moonpie木马 25686 = Moonpie木马 25836 = Trojan-Proxy 25982 = Moonpie木马 26274 = Delta Source木马 27184 = Alvgus 2000 Trojan 29104 = NetTrojan木马 29891 = The Unexplained木马 30001 = ErrOr32木马 30003 = Lamers Death木马 30029 = AOL木马 30100 = NetSphere木马 30101 = NetSphere木马 30102 = NetSphere木马 30103 = NetSphere 木马 30103 = NetSphere木马 30133 = NetSphere木马 30303 = Sockets de Troie 30947 = Intruse木马 31336 = Butt Funnel木马 31337 = Back-Orifice木马 31338 = NetSpy DK 木马 31339 = NetSpy DK 木马 31666 = BOWhack木马 31785 = Hack Attack木马 31787 = Hack Attack木马 31788 = Hack-A-Tack木马 31789 = Hack Attack木马 31791 = Hack Attack木马 31792 = Hack-A-Tack木马 32100 = Peanut Brittle木马 32418 = Acid Battery木马 33333 = Prosiak木马 33577 = Son of PsychWard木马 33777 = Son of PsychWard木马 33911 = Spirit 2000/2001木马 34324 = Big Gluck木马 34555 = Trinoo木马 35555 = Trinoo木马 36549 = Trojan-Proxy 37237 = Mantis Trojan 40412 = The Spy木马 40421 = Agent 40421木马 40422 = Master-Paradise木马 40423 = Master-Paradise木马 40425 = Master-Paradise木马 40426 = Master-Paradise木马 41337 = Storm木马 41666 = Remote Boot tool木马 46147 = Backdoor.sdBot 47262 = Delta Source木马 49301 = Online KeyLogger木马 50130 = Enterprise木马 50505 = Sockets de Troie木马 50766 = Fore木马 51996 = Cafeini木马 53001 = Remote Windows Shutdown木马 54283 = Backdoor/SubSeven木马 54320 = 
Back-Orifice木马 54321 = Back-Orifice木马 55165 = File Manager木马 57341 = NetRaider木马 58339 = Butt Funnel木马 60000 = DeepThroat木马 60411 = Connection木马 61348 = Bunker-hill木马 61466 = Telecommando木马 61603 = Bunker-hill木马 63485 = Bunker-hill木马 65000 = Devil木马 65390 = Eclypse木马 65432 = The Traitor木马 65535 = Rc1木马 2. UDP端口 31 = Masters Paradise木马 41 = DeepThroat木马 53 = 域名解析 67 = 动态IP服务 68 = 动态IP客户端 135 = 本地服务 137 = NETBIOS名称 138 = NETBIOS DGM服务 139 = 文件共享 146 = FC-Infector木马 161 = SNMP服务 162 = SNMP查询 445 = SMB(交换服务器消息块) 500 = VPN密钥协商 666 = Bla木马 999 = DeepThroat木马 1027 = 灰鸽子 1042 = Bla木马 1561 = MuSka52木马 1900 = UPNP(通用即插即用) 2140 = Deep Throat木马 2989 = Rat木马 3129 = Masters Paradise木马 3150 = DeepThroat木马 3700 = Portal of Doom木马 4000 = QQ聊天 4006 = 灰鸽子 5168 = 高波蠕虫 6670 = DeepThroat木马 6771 = DeepThroat木马 6970 = ReadAudio音频数据 8000 = QQ聊天 8099 = VC远程调试 8225 = 灰鸽子 9872 = Portal of Doom木马 9873 = Portal of Doom木马 9874 = Portal of Doom木马 9875 = Portal of Doom木马 10067 = Portal of Doom木马 10167 = Portal of Doom木马 22226 = 高波蠕虫 26274 = Delta Source木马 31337 = Back-Orifice木马 31785 = Hack Attack木马 31787 = Hack Attack木马 31788 = Hack-A-Tack木马 31789 = Hack Attack木马 31791 = Hack Attack木马 31792 = Hack-A-Tack木马 34555 = Trin00 DDoS木马 40422 = Master-Paradise木马 40423 = Master-Paradise木马 40425 = Master-Paradise木马 40426 = Master-Paradise木马 47262 = Delta Source木马 54320 = Back-Orifice木马 54321 = Back-Orifice木马 60000 = DeepThroat木马 六、查看端口的相关方法和工具 1. netstat -an

Just type this command at the cmd prompt, like so:

C:\>netstat -an

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  TCP    0.0.0.0:445            0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1025           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1026           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:1028           0.0.0.0:0              LISTENING
  TCP    0.0.0.0:3372           0.0.0.0:0              LISTENING
  UDP    0.0.0.0:135            *:*
  UDP    0.0.0.0:445            *:*
  UDP    0.0.0.0:1027           *:*
  UDP    127.0.0.1:1029         *:*
  UDP    127.0.0.1:1030         *:*

These are the ports my machine has open while not connected to the internet. The two ports 135 and 445 are fixed ports; the rest are dynamic ports.

2. Strobe

Strobe, the "super optimized TCP port surveyor", is a TCP port scanner. It can quickly locate and scan all TCP "listening" ports on a remote target host, or on many hosts, with maximum bandwidth utilization and minimal process resource usage.

3. Internet Scanner

Internet Scanner is arguably the fastest and most complete security scanning tool available, for UNIX and Windows NT. It is easy to configure, scans quickly, and produces comprehensive reports.

4. Port Scanner

Port Scanner is a port scanning tool that runs on Windows 95 and Windows NT. Its start screen shows two input boxes: the upper one takes the starting host IP address to scan and the lower one takes the ending host IP address. All hosts between the two addresses will be scanned.

5. Nmap

Nmap is the scanner most loved by hackers worldwide. It supports stealth scanning, dynamic delays, retransmission and parallel scanning, decoy scanning, port-filtering detection, direct RPC scanning, distributed scanning, and more. It is extremely flexible and powerful.

7. The role of ports in intrusions, and how we should protect ourselves

1. The role of ports in intrusions

Hackers have long compared the target machine to a house and its ports to the doors leading to the different rooms (services). To occupy the house, an intruder has to break in through a door, so for the intruder it is crucial to know how many doors the house has open, what kind of doors they are, and what lies behind each one.

An intruder usually scans the target host's ports with a scanner to determine which are open. From the open ports the intruder can roughly tell which services the target provides and go on to guess which vulnerabilities might exist, so port scanning helps an attacker understand the target better. For an administrator, scanning your own machine's open ports is likewise the first step of good security hygiene.
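As a minimal illustration of what such a scanner does under the hood, here is a tiny TCP connect-scan sketch in Python; the target address and port list are placeholders, and it should only ever be pointed at hosts you are authorized to test.

import socket

def scan(host, ports):
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

print(scan("127.0.0.1", [21, 22, 23, 80, 135, 139, 443, 445, 3306, 3389]))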

2. Ports commonly exploited by hackers

Certain ports are frequently exploited by hackers, and by trojans and viruses, to attack computer systems. Below is an analysis of the ports most often involved in intrusions.

(1) Port 21 (FTP) penetration analysis

FTP is typically used to manage servers remotely, the classic case being administration of web systems. Once the FTP password leaks, the security of the web system is directly threatened, and through privilege escalation a hacker may even take control of the server outright. Taking the Serv-U FTP server as an example, here are several ways an FTP server can be penetrated:

For Serv-U 5.004 and earlier, a remote overflow exploit can be used directly; on success it yields system privileges. The Metasploit framework in Kali can perform the overflow (it needs to be installed).
Brute-forcing the FTP password; the key is building a good dictionary, and X-way is the usual cracking tool.
Reading the Serv-U user configuration file and cracking the encrypted user passwords, generally via a webshell.
Executing arbitrary system commands through a local privilege-escalation tool.
Sniffing the FTP password off the wire, for example with Cain.

(2) Port 23 penetration analysis

Telnet is an old remote-management method. When logging in with a telnet client, the username and password travel over the network in plain text, so a hacker can capture them with sniffing techniques.

Brute force is the common technique; the X-SCAN scanner can be used for cracking.
On Linux, SSH is generally used for remote access and the sensitive data is encrypted in transit. Telnet on Windows, by contrast, is fragile: by default nothing is encrypted on the wire, so sniffing tools such as Cain can easily capture remote login passwords.

(3) Port 53 penetration analysis

Port 53 is the communication port of DNS servers and is normally used for name resolution. DNS servers are among the most critical servers on a network and are easy targets. Penetration through this port generally takes one of three forms:

Use a remote DNS overflow vulnerability to attack the host directly; success yields system privileges.
Use DNS spoofing to deceive the DNS server; combined with a web-trojan (drive-by) attack it is devastating, letting a hacker take over most hosts on an intranet with little effort. This is one of the usual tricks of intranet penetration.
Denial of service, which can quickly make the target server sluggish or even paralyze the network; a DoS attack against the DNS server leaves every user who relies on it for name resolution unable to get online normally.

(4) Port 80 penetration analysis

Port 80 usually serves web traffic. The typical attack on port 80 today is SQL injection; script (web application) penetration is a highly comprehensive set of techniques and also poses a serious threat to port 80.

For IIS 5.0 on Windows 2000, a remote overflow can be launched directly against the host; success yields system privileges.
For IIS 5.0 on Windows 2000, hackers also try the "Microsoft IIS CGI" filename decoding vulnerability; X-SCAN can detect IIS vulnerabilities directly.
The IIS write-permission vulnerability is a security problem caused by IIS misconfiguration; an attacker can upload malicious code, such as a script trojan, to a vulnerable server to expand their control.
Ordinary HTTP packets are not encrypted in transit, so sniffing tools such as Cain can capture sensitive data.
Attacks on port 80 more often rely on script-penetration techniques; exploiting web application vulnerabilities is currently a very popular approach.
Penetrating a server that exposes only port 80 is hard; port-reuse tools can get around that technical hurdle.
CC attacks are less dramatic than DDoS but still useful against small web sites: they can make the target site sluggish, keep pages from loading, and sometimes leak the web application's absolute path.

(5) Port 135 penetration analysis

Port 135 is mainly used by the RPC protocol and provides DCOM services. RPC ensures that a program running on one computer can smoothly execute code on a remote computer; DCOM can communicate directly over the network and can travel over multiple transports, including HTTP. The port has had its share of vulnerabilities, the most serious being a buffer overflow: the once-rampant "Blaster" worm spread through exactly this hole. For port 135, hackers typically proceed as follows:

Find hosts with the RPC overflow and exploit them remotely to obtain system privileges directly; for example, scan for vulnerable hosts with "DSScan", then run "ms05011.exe" against them to gain system privileges after a successful overflow.
Scan for port-135 hosts with weak passwords, use RPC remote calls to start the telnet service, log in over telnet, and run system commands. Weak-password scanning is usually done with X-SCAN and SHCAN; the telnet service can be enabled with the Recton tool.

(6) Ports 139/445 penetration analysis

Port 139 serves the "NetBIOS Session Service" and is mainly used for Windows file and printer sharing and for Samba on UNIX. Port 445 is also used for Windows file and printer sharing and is used very widely on intranets. Both ports are prime targets and have seen many severe vulnerabilities.

The basic approach to penetrating these ports is as follows.

For hosts with 139/445 open, try overflow vulnerabilities against the remote host; success yields system privileges directly.
For hosts with only port 445 open, hackers generally use the "MS06040" or "MS08067" tools; a dedicated port-445 scanner can be used for discovery. The MS08067 overflow tool is very effective against Windows 2003, and its basic usage parameters are shown at the cmd prompt.
For hosts with 139/445 open, hackers commonly penetrate via IPC$. A null session established without specific credentials carries minimal privileges, so obtaining a specific account and password, such as the administrator password, becomes the key to escalating privileges.
For hosts with 139/445 open, shares can be used to harvest sensitive information, which is also a basic way of gathering intelligence during intranet penetration.

(7) Port 1433 penetration analysis

1433 is the default port of SQL Server. The SQL Server service uses two ports: TCP 1433 and UDP 1434. Port 1433 is the one SQL Server uses to provide service; 1434 tells a requester which TCP/IP ports SQL Server is using. Port 1433 is attacked constantly and in ever more varied ways; the most serious is the remote overflow vulnerability, and with the rise of SQL injection, databases of every kind face continual security threats. Using SQL injection to penetrate a database is currently a popular attack method and belongs to the script-penetration family.

For SQL Server 2000 database servers with port 1433 open, hackers try remote overflow exploits against the host; success yields system privileges directly.
Brute force is a classic technique; the usual target is the SA account, whose password can often be cracked quickly with a dictionary.
Sniffing can likewise capture SQL Server login passwords.
Sloppy script code, for example insufficient parameter filtering by programmers, leads to serious injection vulnerabilities; through SQL injection the database server can be penetrated indirectly, and system commands can be executed by calling certain stored procedures, for instance with an all-in-one SQL exploitation tool.

(8) Port 1521 penetration analysis

1521 is the default listener port of the large-scale Oracle database. Newcomers may find this port unfamiliar, since most people deal with Access, MSSQL, and MySQL; generally only large sites deploy such an expensive database system. For penetrating this rather complex database system, a hacker's approach is:

Oracle ships with a great many default usernames and passwords; to gain access to the database system, cracking database accounts and passwords is a line of defense the hacker must break through.
SQL injection is just as effective against Oracle; through injection the database's sensitive information, including administrator passwords, can be obtained.
Java procedures can be created directly at the injection point to execute system commands.

(9) Port 3306 penetration analysis

3306 is the default listener port of MySQL, which is usually deployed in mid-sized web systems. The LAMP stack is extremely popular in China, and attacks on php+mysql architectures are a hot topic. MySQL allows user-defined functions (UDFs), which lets a hacker write malicious UDFs to penetrate the server and ultimately obtain its highest privileges. For port 3306, the hacker's methods are:

Because administrators' security awareness is often weak, the management password is frequently too simple or even empty; such passwords are easily cracked with cracking software. With the cracked password the hacker logs into the remote MySQL database, uploads and registers crafted malicious UDF code, and executes system commands by calling the registered function, or exports a malicious script into the web directory to take control of the whole web system.
The powerful "Cain" also supports sniffing port 3306, and sniffing is another avenue of penetration.
SQL injection likewise poses a huge threat to MySQL: besides obtaining sensitive database information, load_file() can read sensitive system configuration files, root credentials can be pulled from web database-connection files, and malicious code can be exported to a chosen path.

(10) Port 3389 penetration analysis

3389 is the default listener port of Windows Remote Desktop. Administrators maintain servers through Remote Desktop, which makes management very convenient, so this port is also one of the ports hackers find most interesting: it can be used to control a remote server, it requires no additional software, and it is simple to use. It is also a legitimate system service, so it is usually not flagged by antivirus software.

For older Windows 2000 systems, penetrate through the "input method vulnerability".
Use the password-cracking program for Windows 2000 Terminal Services that Microsoft itself recommended to users for checking the strength of terminal-service passwords. The program uses the msrdp control to open a virtual remote-terminal connection window locally and cracks passwords from a dictionary; many parameters can be specified, it is flexible to use, and cracking speed depends on the bandwidth between the attacking and the attacked host.
Cain, a superb penetration tool, also supports sniffing port 3389.
Image hijacking combined with the Shift sticky-keys trick. When hardening a server, security staff usually lean on Group Policy, for example preventing unauthorized users from running cmd and denying unauthorized remote logons (Group Policy configuration is covered in detail in the information-systems security engineer course), so that even an administrator account may be unable to log in. The hacker's way around Group Policy sits right at the 3389 logon screen: image hijacking plus the Shift sticky-keys trick brings up Task Manager, from which the Group Policy editor can be opened and adjusted to suit the situation.
Social engineering is often the most frightening technique of all: if an administrator's habits and routines are thoroughly understood by a hacker, the network he manages can be penetrated through his weaknesses.

(11) Port 4899 penetration analysis

Port 4899 is the default listener port of the Remote Administrator remote-control software, commonly known as Radmin. Radmin supports TCP/IP and is very widely used; you will find it on many servers. The approach to this software is:

Plenty of Radmin hosts have weak passwords; a dedicated scanner can find such vulnerable hosts.
Radmin's connection password and port are written into the system registry; with a webshell's registry-reading feature, the Radmin registry values can be read and the encrypted password hash cracked.

(12) Port 5631 penetration analysis

Port 5631 is the default listener port of the well-known remote-control software Symantec pcAnywhere, a world leader among remote-control products.

Symantec Altiris Privilege Escalation Vulnerability Analysis (CVE-2018-5240)

Introduction

During a recent penetration test we found a security vulnerability in the latest version of the Symantec Management Agent (Altiris), and this vulnerability allows an attacker to escalate privileges.

Overview

When the Altiris agent performs an inventory scan (for example a software scan), the SYSTEM-level service re-applies the permissions on the NSI and Outbox directories after the scan completes, namely:

C:\Program Files\Altiris\Inventory\Outbox
C:\Program Files\Altiris\Inventory\NSI

The permissions being applied give members of the "Everyone" group full control of these two directories, and they allow any standard user to create a junction pointing at a different directory. As a result, the "Everyone" permission is applied to whatever directory the junction points to, and every file and folder beneath it inherits that full-control permission.

This means any low-privileged user can escalate privileges on an endpoint running Symantec Management Agent v7.6, v8.0, or v8.1 RU7.

Analysis & Discovery

During penetration tests we regularly run into hosts with all sorts of endpoint software installed. That software can easily become our way in, because we can use it to escalate privileges or to move laterally.

Among endpoint-management products, Symantec's Altiris is one we see often. It is an endpoint-management framework that not only helps organizations and administrators make sure devices have the latest operating system patches and software updates installed, but can also check user and group permissions.

The version we tested was v7.6, but Symantec has confirmed that all Altiris versions prior to the latest patch are affected by this issue.

We noticed that directories in the Altiris file structure carried "Everyone - Full Control" permissions. The directories appear to store legitimate content, such as scan configuration files and XML files. The permissions on these directories and files were spotted with a single line of PowerShell, which lets us dump the ACLs of any Windows host:

Get-ChildItem C:\ -Recurse -ErrorAction SilentlyContinue | ForEach-Object { try { Get-Acl -Path $_.FullName | Select-Object PSChildName, PSPath, AccessToString } catch {} } | Export-Csv C:\temp\acl.csv -NoTypeInformation
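As a rough cross-check of the same idea (our addition, not part of the original write-up), the built-in icacls tool can be driven from Python to flag directories that grant the Everyone group full control; the paths and the ACE strings matched below are assumptions and may vary with Windows language and configuration.

import subprocess

def everyone_full_control(path):
    """Return True if icacls reports Everyone with (F)ull control on `path`."""
    out = subprocess.run(["icacls", path], capture_output=True, text=True).stdout
    return "Everyone:(OI)(CI)(F)" in out or "Everyone:(F)" in out

for d in (r"C:\Program Files\Altiris\Inventory\Outbox",
          r"C:\Program Files\Altiris\Inventory\NSI"):
    print(d, "=> Everyone full control:", everyone_full_control(d))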

Looking at the timestamps on these directories, we noticed that the files inside them changed every day. Digging further, we found that the files are modified after Altiris finishes a system or software scan. Depending on how an organization configures its scan tasks, this can happen several more times a day.

Here is where it gets interesting: once we spotted this behaviour, we wanted to see whether the attack technique recently disclosed by Cylance would work here [see the referenced article].

Below are the directory permissions of the NSI folder; the Outbox directory carries the same permissions:

Next, we can try James Forshaw's symbolic-link testing tools to redirect this directory elsewhere by creating a mount point to another directory and seeing whether the files under it get rewritten ― and indeed it worked. We could also have used the Sysinternals junction tool, but that tool requires the source directory not to exist, whereas here the directory already exists and already carries the "Everyone" permission. For example:

If we delete the directory, we no longer have the permissions needed to pull off the attack. James Forshaw's tool, however, allows overwriting an existing directory:

Another Windows tool, mklink.exe, can also be used in this attack technique, but it requires elevated privileges, which makes it unsuitable here ― the whole point is that we are trying to escalate.

Attack analysis

So how do we actually exploit this? There are plenty of ways to abuse the vulnerability, but the simplest is to try to overwrite the permissions of the entire Altiris root directory ("C:\Program Files\Altiris\Altiris Agent\"), so that we can modify the binary of the service that runs under the SYSTEM account, AeXNSAgent.exe.

The screenshots below show the permissions of the Altiris Agent directory and of the AeXNSAgent.exe service binary before the mount point rewrites them:

Next we create a mount point to the Altiris Agent directory; once it runs we end up with full control over every file, which is very simple to achieve. Here we can use James Forshaw's symbolic-link testing tools to create and verify the mount point.

Then we just wait for the target host to run its scan task again; the screenshot below shows the result:

Once we have full control of AeXNSAgent.exe we can replace the service binary and reboot the host to obtain SYSTEM privileges.

Summary

Altiris Management Agent v7.6, v8.0, and v8.1 RU7 are all affected by this vulnerability; we strongly recommend updating your software as soon as possible.

If you know other ways to exploit this vulnerability, feel free to discuss them in the comments below.

* Source: nettitude; compiled by FreeBuf editor Alpha_h4ck. Please credit CodeSec.Net when reprinting.

Beware! A New "Satan" Ransomware Strikes, and 360 Is First to Support Decryption ― After "Gene" Editing, How Far Is Satan from a Dimension-Reducing Strike ...


Gene editing is, for humankind, a "Pandora's box", a sword of Damocles hanging over our heads. After all, the genes you inherited are not something you can rewrite or buff at will.

For a ransomware family like Satan, however, editing its "genes" is nothing miraculous. Over the past few days, the 360 Internet Security Center has observed that the Satan ransomware received another update.

Clearly, in a year-end sprint for their "extortion business", the hackers have once again

Analyzing Character Data from Koei's Romance of the Three Kingdoms Series with R

Preface

There are two reasons for writing this article. The first is that I have recently been watching The Advisors Alliance starring Wu Xiubo; the plot is tight, the acting is excellent, and there are many striking details, and it rekindled my interest in the Three Kingdoms. Growing up I played plenty of Three Kingdoms games and read many Three Kingdoms books, and the TV adaptations ― the CCTV version, Gao Xixi's newer series ― were no problem either. In recent years, though, apart from occasionally playing Romance of the Three Kingdoms X, I haven't really studied the period again, so I wanted to use this analysis to revisit those characters and stories. The second reason is that I haven't written much R for quite a while and don't get many chances to use it at work, so I'm rusty; I planned to use this dataset for practice, and since it is Chinese-language data, I had never handled Chinese data in R before.

Numeric character stats have always been a feature of Koei's historical games; they are Koei's overall evaluation of each figure, drawing on the histories, the novel, and unofficial accounts. Koei's Romance of the Three Kingdoms series is arguably the most classic Three Kingdoms game: since the first installment in 1985 there have now been thirteen titles. With several hundred officers appearing in each installment, have their attributes changed much across all these versions, and what do those changes reflect? That is the main question this analysis wants to examine.

The officer data I use was collected and compiled by a Taiwanese netizen (cws0324@yahoo.com.tw) and covers all officer data from Romance of the Three Kingdoms 1-11. The benefit of giving characters numeric values is obvious: it gives us an intuitive sense of how strong or weak officers are in the game. The downside is that we sometimes look only at the numbers and overlook the person's true historical side; the problem is even worse with Japanese Sengoku figures, e.g. Date Masamune and Takenaka Hanbei. If you want to look into that, see:

Why are there so many famous generals from Japan's Sengoku period? (the first answer cracked me up)

Thoughts on the atmosphere and state of amateur study of Japanese Sengoku history in China

Ma Boyong did a similar analysis in a Zhihu answer, using Excel as his tool; have a look if you're interested: Are the officer stats in Koei's Romance of the Three Kingdoms games designed according to Three Kingdoms history?

I also built a character lookup tool for the series with this data plus shiny: https://nathanpan.shinyapps.io/RoTC-Searching/

Since I know the novel better than the histories, the analysis leans more toward the novel's perspective, supplemented by the historical record. Enough talk ― let's start the analysis.

1. Data wrangling

First, let's look at the format of the raw data.

We find the same variables spread across multiple columns, and the version information occupies several merged cells. Data like this is hard to read into R without manual preprocessing (perhaps it's possible?); it might be directly usable in Excel, but in R it is not what we call tidy data. Tidy data is defined as:

Every variable must have its own column
Every observation must have its own row
Every value must have its own cell

I decided to split the version data by hand into 11 sheets, each as shown below. (Naturally I could also have produced the final table entirely by hand, but I decided to do it in R.)

I've put the preprocessed data here

These are the packages we need.

library(readxl)
library(dplyr)
library(data.table)
library(ggplot2)

To be able to use Chinese in R, we set the system locale to "Chs" with the code below; the operating system here is Windows 10 Home.

Sys.setlocale('LC_ALL','Chs')

Next we read the data with readxl::read_excel. Each version's data is stored in its own data frame, giving us 11 data frames in total, which are in turn kept in a list (because lapply returns a list); we name it dt.

dt <- lapply(1:11, function(x) read_excel("Characters.xlsx", x))

Because the attributes an officer has differ from installment to installment, to make later steps easier I want to build one big data frame whose variables are the name, the attributes from every version, and the versions in which the character appears. Here is how to get there.

First, for each version I remove the variables we don't need and the officers who do not appear in that version; some NPCs have every attribute equal to 0, so we drop them as well, and store the cleaned data in a new variable called series.

# Columns 2-8 are variables we don't need
series <- lapply(1:11, function(x) {
  select(dt[[x]], -c(2:8)) %>%
    filter(complete.cases(dt[[x]][, -c(2:8)])) %>%
    mutate(版本 = paste0("三国志", x)) %>%
    filter(智力 != 0)   # when one attribute is 0, the others are usually all 0 too
})

The first six officers of the first installment are the following.

head(series[[1]])
# A tibble: 6 x 7
# 姓名

WarpFuture News: The Oracle Database Ransomware Is Back from the Dead


2018-11-30 16:45 · Blockchain · Technology


1. Background

Blockchain security consultancy WarpFuture reports that the Oracle database ransomware has recently become active again. It is not actually a new piece of malware: it was first seen two years ago, in November 2016, then lay dormant for more than a year until recently, when it suddenly came back from the dead.

Screenshots from an infected system are shown below:

In multiple Oracle database extortion cases discovered as early as May 28, the database displays the following ransom message after infection:

After obtaining the sample and analyzing it in depth, we confirmed that the malware is the RushQL database ransomware and that it gets in through a cracked copy of PL/SQL Developer.

2. Technical analysis

The RushQL sample was collected on site. It is AfterConnect.sql, an auto-run script bundled with PL/SQL Developer. In the official PL/SQL Developer release this file is empty, but the sample, extracted from a cracked copy of PL/SQL Developer, has real content, as shown below:

The key code of the script is encrypted with wrap, Oracle's dedicated code-obfuscation tool for PL/SQL:

Decrypting the script shows that its main job is to create four stored procedures and three triggers, whose functions are analyzed one by one below.

PROCEDURE DBMS_SUPPORT_INTERNAL

The main behaviour of the DBMS_SUPPORT_INTERNAL procedure above is:

If the database was created more than 1,200 days ago, then:

(1) Create a backup of the data in sys.tab$ into a table named ORACHK || SUBSTR(SYS_GUID, 10)

(2) Delete the data in sys.tab$ for tables whose creator IDs fall in the (0, 38) range

(3) Clear the backup information via SYS.DBMS_BACKUP_RESTORE.RESETCFILESECTION

(4) Write the ransom message into your alert log 2,046 times via DBMS_SYSTEM.KSDWRT

(5) Raise a warning that displays the ransom message

The main behaviour of the DBMS_SYSTEM_INTERNAL procedure above is:

If the current date minus the minimum last-analyzed date of the data tables (excluding SYSTEM, SYSAUX, and EXAMPLE) is greater than 1,200 days, and the current client process name is not "C89239.EXE", raise the warning with the ransom message.

The main behaviour of this procedure (DBMS_CORE_INTERNAL) is: if the current date minus the minimum last-analyzed date of the data tables (excluding SYSTEM, SYSAUX, and EXAMPLE) is greater than 1,200 days, then: 1. drop every data table whose name does not contain "$" and is not ORACHK || SUBSTR(SYS_GUID, 10); 2. if the current client process name is not "C89238.EXE" (this "C89238.EXE" is presumably a kill switch built in by the author: once the ransom is paid, accessing the database under this process name keeps the database from entering its broken state, which makes recovering the data easier), raise the alert (see the ransom message in the overview).

The main function of the DBMS_STANDARD_FUN9 procedure above is to execute PL/SQL scripts dynamically.

3. Remediation

Blockchain security consultancy WarpFuture notes: the technical analysis shows that the malware decides whether to detonate based on factors such as the database creation date and the tables' last-analyzed dates, so it has an incubation period during which there are no obvious symptoms. How can you check for infection during that period? The malware hooks into the database through four stored procedures and three triggers, so checking whether the following objects exist tells you whether RushQL is present (a small check script is sketched below):

Stored procedure DBMS_SUPPORT_INTERNAL
Stored procedure DBMS_STANDARD_FUN9
Stored procedure DBMS_SYSTEM_INTERNAL
Stored procedure DBMS_CORE_INTERNAL
Trigger DBMS_SUPPORT_INTERNAL
Trigger DBMS_SYSTEM_INTERNAL
Trigger DBMS_CORE_INTERNAL

What should you do if you are unlucky enough to be infected? The analysis shows that data deletion is carried out by the DBMS_SUPPORT_INTERNAL and DBMS_CORE_INTERNAL procedures, whose trigger conditions are, respectively:

1. current date - database creation date > 1,200 days
2. current date - minimum last-analyzed date of the data tables (excluding SYSTEM, SYSAUX, EXAMPLE) > 1,200 days

When neither condition is met, the malware will not trigger its data-deleting behaviour; simply drop the four stored procedures and three triggers above. If either condition is met, data deletion will occur; the countermeasures are given below.
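Here is a minimal sketch of that self-check, assuming the cx_Oracle driver and a DBA-privileged connection; the connection string is a placeholder for your own environment.

import cx_Oracle

RUSHQL_OBJECTS = ("DBMS_SUPPORT_INTERNAL", "DBMS_STANDARD_FUN9",
                  "DBMS_SYSTEM_INTERNAL", "DBMS_CORE_INTERNAL")

conn = cx_Oracle.connect("sys", "change_me", "dbhost/ORCL", mode=cx_Oracle.SYSDBA)
cur = conn.cursor()
cur.execute(
    "SELECT owner, object_name, object_type FROM dba_objects "
    "WHERE object_type IN ('PROCEDURE', 'TRIGGER') "
    "AND object_name IN (:1, :2, :3, :4)",
    RUSHQL_OBJECTS,
)
rows = cur.fetchall()
if rows:
    print("Possible RushQL infection - found:")
    for owner, name, otype in rows:
        print("  %s %s.%s" % (otype, owner, name))
else:
    print("No RushQL procedures or triggers found.")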

Countermeasure 1: (current date - database creation date > 1,200 days) and (current date - minimum last-analyzed date of the data tables (excluding SYSTEM, SYSAUX, EXAMPLE) <= 1,200 days): (A) drop the four stored procedures and three triggers; (B) restore the tables from backup to their state before the truncate; (C) restore tab$ from the table whose name begins with ORACHK; (D) recover with DUL (not every table can necessarily be recovered, e.g. if the truncated space has already been reused).

Countermeasure 2: (current date - database creation date > 1,200 days) and (current date - minimum last-analyzed date of the data tables (excluding SYSTEM, SYSAUX, EXAMPLE) > 1,200 days): (A) drop the four stored procedures and three triggers; (B) restore the tables from backup to their state before the truncate; (C) recover with DUL (not every table can necessarily be recovered, e.g. if the truncated space has already been reused).

This article was compiled by the security consultancy WarpFuture (WarpFuture.com); please credit the source when reprinting. WarpFuture provides blockchain security consulting services covering mainchain security, exchange security, exchange wallet security, DApp development security, smart contract development security, and more.

This article is by the author "Blockchain Security Archive"; it is original content, so please keep this statement and the article link when reprinting. The content is for readers' reference only and is not investment advice; this site reserves all legal rights.



Net Core security - NWebSec to the rescue!


A quick overview of securing a Net Core webapp using NWebSec and the web.config

First up, let's install NWebSec middleware from nuget via the package manager

PM> Install-Package NWebsec.AspNetCore.Middleware

For those of you (like me) who are a little rusty on security best practise, two of the general principles are:

Reduce attack surface (make it as hard as possible for potential attackers to glean information about your app)
Restrict access (unless securely authorised)

The ingredients for a safe Net Core app broadly feed into these practises and include the following (non-exhaustive) list:

[HSTS] HTTP Strict Transport Security Header
X-XSS-Protection Header
X-Frame-Options Header
[CSP] Content-Security-Policy Header
X-Content-Type-Options Header
Referrer-Policy Http Header
Remove the X-Powered-By header to remove the additional information transferred by verifying the app tech
[HPKP] HTTP Public Key Pinning Header

Let's take these one at a time!

[HSTS] HTTP Strict Transport Security Header

This is what it sounds like - force all comms to go through HTTPS! Using the .Preload() indicated below forces it from the first request.

app.UseHsts(options => options.MaxAge(365).IncludeSubdomains().Preload());

X-XSS-Protection Header

This response header prevents pages from loading in modern browsers when reflected cross-site scripting is detected. This is often unnecessary if a site implements a strong Content-Security-Policy (spoilers!)

app.UseXXssProtection(options => options.EnabledWithBlockMode());

X-Frame-Options Header

Ensure that site content is not being embedded in an iframe on other sites - used to avoid clickjacking attacks.

app.UseXfo(options => options.SameOrigin());

[CSP] Content-Security-Policy Header

The content security policy essentially allows you to whitelist resource origins when the site is loaded. These policies are usually to do with server and script origins.

There are a heap of different ways you can configure this and they are very much dependent upon your requirements and what you need to load in and out. You can read more about your options in the handy Mozilla docs

An example would be:

app.UseCsp(opts => opts
    .BlockAllMixedContent()
    .StyleSources(s => s.Self())
    .StyleSources(s => s.UnsafeInline())
    .FontSources(s => s.Self())
    .FormActions(s => s.Self())
    .FrameAncestors(s => s.Self())
    .ImageSources(s => s.Self())
    .ScriptSources(s => s.Self())
);

X-Content-Type-Options Header

Blocks any content sniffing that could happen that might change an innocent MIME type (e.g. text/css) into something executable that could do some real damage.

app.UseXContentTypeOptions();

Referrer-Policy Http Header

This tells the browser how much information to send along in the Referer header field (misspelt!). The default value is no-referrer-when-downgrade, i.e. don't send any referrer data if we're downgrading security protocols and going from an HTTPS to an HTTP site.

This one depends a bit on your requirements, the options are listed in detail on Mozilla's dev site to help you make a decision. If you want to be super safe, then opt for:

app.UseReferrerPolicy(opts => opts.NoReferrer());

Remove X-Powered-By Header

Now let's make sure that we're not giving information away regarding the technology in use (i.e. ASP.NET). To do this, we'll remove the X-Powered-By header by adding to the web.config

<system.web>
  <httpRuntime enableVersionHeader="false"/>
</system.web>
<system.webServer>
  ...
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

[HPKP] HTTP Public Key Pinning Header

This one is interesting and has to do with whitelisting certificates. There are a couple of plugins you can use to facilitate this, and it's covered comprehensively in @JoonasWestlin's blog here

Further links/reading: A good tool to test the security headers is Geek Flare, and a wealth of easy-to-digest information on general .NET security best practise is available at OWASP.org

This is just a quick point of reference to get started on Net Core site (mostly header-based) security - what's missing? Other recommendations?

Threat Hunting: Improving Bot Detection in Enterprise SD-WANs


How security researchers tracked down the Kuai and Bujo malware through multiple vectors including client type, traffic frequency, and destination.

For over a year, security researchers at Cato Networks have observed a trend occurring across SD-WANs that relates to unidentified malware in the enterprise. This malware continues to persist despite the investment in antivirus (AV) and other preventative systems. Below are two examples. Let's take a closer look to better understand how to protect your network.

Case #1: Kuai

In the following example, we identify a new malicious bot that we call "Kuai." To clarify, although the term "bot" is commonly used in a way that's synonymous with malicious intent, in fact, bots are also legitimate networking elements, such as an OS updater. As someone concerned about the security of your SD-WAN, you need to distinguish between the two. We have found that malicious bots can be identified by looking at multiple vectors―in this case, the client type, the traffic frequency, and the destination.

The first sign that this is a malicious bot is the client. Our researchers use machine learning algorithms to analyze network flows across the Cato Cloud network. By studying network flows, the researchers identify whether traffic originates from a browser, a bot, or other types of clients, and then "guess" at the exact client―for example, in the case of a bot, the type of bot, such as an OS updater or a python/Ruby client. In this case, we identify the client as a bot of type "unknown."

Next, we notice the shape of the client's traffic flow. We measure traffic frequency over time, providing multidimensional insight into a traffic flow. Periodicity and traffic patterns help determine whether the traffic is initiated by a human or a machine. As you can see by looking at the communication graph (Figure 1), the activity is consistent and uniform. Human-generated traffic tends to vary over time while machine-generated traffic tends to be almost uniformly distributed, like this graph.
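As a rough illustration of that idea (not Cato's actual model), one simple heuristic is to compute the coefficient of variation of the gaps between requests for a given client/destination pair; near-uniform, machine-like traffic yields a very low value. The threshold and sample timestamps below are illustrative assumptions.

import statistics

def looks_periodic(timestamps, cv_threshold=0.1):
    """timestamps: sorted request times (seconds) for one client/destination pair."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:
        return False                           # not enough samples to judge
    mean = statistics.mean(gaps)
    cv = statistics.pstdev(gaps) / mean        # coefficient of variation of the gaps
    return cv < cv_threshold                   # low variation => bot-like periodicity

print(looks_periodic([0, 60, 120, 181, 240, 300, 359]))  # True: ~60s beacon
print(looks_periodic([0, 12, 95, 130, 400, 420, 800]))   # False: human-like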



Figure 1 - Periodic communication is one indicator of bot-like C&C traffic.

Notice the destinations. The IP addresses reside in three autonomous system numbers ― AS4837, AS4808, and AS134420 ― all of which are based in China, an originating point of many malicious bots. The URLs are also marked by low reputation (not shown). This is different from most threat-hunting or AV systems where the URL generally would be marked "malicious" using one of the third-party feeds available on the market.

Our experience has been that such feeds often include too many false positives and fail to accurately categorize new URLs. What's more, attackers can use the services' APIs to game them. Instead, we developed a popularity model that ranks URLs by the likelihood of posing a threat. The model analyzes the millions of network flows traversing our networks, flows involving many domains and clients. The model then ranks domains; the lower the reputation, the higher the risk.

Together, the three elements of client type, the destination, and traffic frequency lead to the identification of the malicious bot, Kuai. It's important to note that most AV software, even next-generation AVs relying on machine-learning models rather than file signatures, fail to identify Kuai. According to VirusTotal, a Google service that scans files by multiple AVs, only six out of 68 AV engines considered this file a true threat.



Figure 2 - VirusTotal screenshot, reveals a low detection rate of the threat

Case #2: Bujo

In our second case, we identify a new bot from a Chrome extension. The Bujo bot (named after the destination domain, bujot.com) again exhibits periodic communication, but this time to a parked domain, bujot.com. Upon investigation, we see that this domain is registered without any association to a web service. The traffic reveals that it was generated by a Chrome extension (user agent below), an extension not found on the Chrome Web Store.



Figure 3 - Periodic bot-generated communication of Bujo.

Further analysis of a Bujo sample reveals a fraudulent network monetizing a major search engine vendor. And once again, we see very few network-based, preventative solutions can detect Bujo. According to VirusTotal, only four of the 68 AV engines tagged Bujo as malicious.



Figure 4 - Low detection rate of Bujo as reported by VirusTotal.

Prevention? Detection? Response? You Need All of Them

Prevention mechanisms are designed to prevent infection attempts in real time. Yet malware is evasive and every day we witness new types of scams or techniques that manage to evade AVs. It's a cat-and-mouse game where AV vendors produce very large databases with malicious file signatures and attackers work to get around them.

All too often, though, when malware is less common or not widely distributed, AVs come late to the game. As a result, machines end up infected by threats detectable when observing network communications with command and control servers. Even more advanced engines, relying on machine learning rather than signatures, often fail to detect these threats. Organizations simply cannot rely solely on AV to protect from Internet-borne threats.

Indicators of Compromise (IOCs)

Here are the known C&C domains used by the Bujo and Kuai bots.

Table 1: Indicators of Compromise (IOCs)

Kuai: abckantu[.]com
Bujo: bujot[.]com, nusojog[.]com, rokuq[.]com, focuquc[.]com, tawuhoju[.]com, qukusut[.]com, sastts[.]com, tocopada[.]com, norugu[.]com, pacudoh[.]com, srchlp[.]com

Related Content:

7 Ways an Old Tool Still Teaches New Lessons About Web AppSec
Battling Bots: How to Find Fake Twitter Followers
Security Researchers Struggle with Bot Management Programs

This Week in Security News: Ethics and Law in the Dark Web



Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, learn how Trend Micro software can aid in safely securing containers on the AWS Cloud. Also, learn how the dark web has become a new advertising medium for practitioners of law.

Read on:

Securing Containers in The AWS Cloud with Trend Micro

Dynamic environments require security that integrates with CI/CD pipelines, provides runtime protection for Docker and Kubernetes, and protection for inter-container traffic.

Middle East, North Africa Cybercrime Ups Its Game

Ransomware infections increased by 233% this past year in the Middle East and North Africa as part of a shift toward more savvy and aggressive cybercrime operations in a region.

Today’s Data Breach Environment: An Overview

Leveraging data from Privacy Rights Clearinghouse, Trend Micro researchers discovered that overall, there has been a 16 percent increase in mega breaches compared to 2017.

DoJ Takes Down Online Ad Fraud Ring, Indicts 8

The U.S. Department of Justice revealed an unsealed indictment of eight defendants for crimes related to their involvement in widespread digital advertising fraud.

Water and Energy Sectors Through the Lens of the Cybercriminal Underground

As organizations in critical sectors (CI) like water and energy continue to incorporate the industrial internet of things (IIoT) in their operations, they should start with security in mind.

Atrium Health Data Breach Exposed 2.65 Million Patient Records

Atrium Health has revealed a data breach which exposed information belonging to roughly 2.65 million patients.

Uncovering the Truth About Corporate IoT Security

Trend Micro looks at IoT projects being driven by global organizations, their key challenges and perceived threats, and hard data outlining the frequency and type of attacks they’ve experienced.

Uber Fined Nearly $1.2 Million by British and Dutch Authorities for 2016 Data Breach

Uber was fined a combined $1.17 million by British and Dutch authorities for a 2016 data breach and cover-up that exposed the personal details of millions of customers.

Automating Security, Continuous Monitoring, and Auditing in DevOps

DevOps entails pivotal shifts, like the way monitoring and auditing are carried out. As requirements deploying applications change, the requisites for monitoring and auditing also change.

AWS Doubles Down on Containers, Launches MicroVM Manager

At re:Invent, AWS announced its new Container Competency Program and the addition of 160+ new container-based products to its Amazon Marketplace software catalogue.

Ethics Need Not Apply: The Dark Side of Law

In the course of Trend Micro’s research, we saw that lawyers were offering legitimate legal advice in matters related to family law, criminal law, real estate law, and business.

Did the findings of Privacy Rights Clearinghouse surprise you? Why or why not? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.

Lose Coins Just by Receiving a Word Document? How to Save Wallets Targeted by 300,000 Hackers


In the world of cryptocurrency, your assets are a string of code on a chain.

And that makes it a new frontier for hackers to get rich. According to the latest survey data from network security company Carbon Black, roughly $1.1 billion worth of cryptocurrency was stolen in the first half of 2018, and in the second half of 2018 the number kept climbing.

Cryptocurrency wallets are one of the areas where hackers run wild.

Hot wallets, cold wallets, custodial or decentralized wallets, cloud wallets, HD wallets ― wherever there is profit to be made, you can spot the hackers' cunning shadows.

Defending the wallets that hackers have set their sights on will be a long and brutal fight.

Rampant hackers and fragile systems

"In mid-2017 a large wave of planned, targeted hackers appeared," Shenyu, founder of the F2Pool mining pool and CEO of Cobo, told us. "Reportedly some 300,000 hackers are attacking every aspect of blockchain or gaming it for easy profit."

Cryptocurrency wallets are one of the areas hardest hit by these attacks.

On November 27, a hacker compromised a JavaScript library and injected malicious code into it, attempting to steal the Bitcoin and Bitcoin Cash stored by the BitPay and Copay wallets.

The affected BitPay and Copay are globally known cryptocurrency wallet brands. BitPay raised a $40 million Series B in April this year and processed close to $2 billion in payments in 2017.

The threat to BitPay and Copay is by no means an isolated case; by now hackers attack wallets with complete abandon.

In July this year, Parity, the cryptocurrency wallet led by former Ethereum CTO Gavin Wood, issued a security alert saying 153,000 ETH (worth roughly $30 million) had been stolen.

In 2017, in the wallet space alone, there were six incidents in which hacker attacks caused losses above the ten-million-dollar mark. And set against the hackers' frenzied attacks is the fragile state of wallet security technology today.

By incomplete counts there are upwards of a thousand wallet products on the market, but those with real security engineering are few.

"A wallet is a product where security matters enormously, yet users don't understand wallets very well," Shenyu says. "Many are built by one or two people in a week or two from casually modified open-source code; at the bottom layer their security is essentially running naked."

This year the wallet space hit a startup peak, and a flood of crude, slapdash wallet products appeared ― "take some open-source code, tweak it, make the UI a bit prettier, keep the features quick and dirty." The moment users adopt one, they are putting their assets in an extremely unsafe environment.

In May this year, 360's information security department released the Digital Currency Wallet Security White Paper, which found that of the twenty-plus most mainstream wallets on the market, eighty percent had security weaknesses.

"With security in this state, many hackers treat the blockchain as a shooting range. Once a cryptocurrency wallet is breached by them, the lost assets are virtually impossible to recover, so hackers have turned the blockchain into their 'ATM'," says Zhao Wei, technical lead of a blockchain security company.

It is exactly this low barrier to entry that has turned the young wallet sector into the hackers' slaughterhouse.

The vanishing keys

Besides attacking wallet security teams directly, hackers also set their sights on ordinary users.

Anyone who has used a cryptocurrency wallet knows that the mnemonic phrase corresponding to the private key is the crux of a wallet's security. For a non-custodial wallet, losing the mnemonic means the wallet can never be recovered; conversely, if the mnemonic is stolen by a hacker, the wallet is as good as breached.

"Honestly, 80% of the people in this world are not capable of managing random numbers (which are tightly bound to private keys)," Shenyu says. "Once you are targeted, any run-of-the-mill trojan will leave your digital assets wiped out."

Take the simplest example: many people use a third-party input method on their phones. As soon as there are "input" and "backup" operations, the words you type are no longer safe ― they have already been uploaded to a third-party database.

If you "copy" your mnemonic anywhere, the moment it enters the clipboard it is under 360-degree surveillance by all kinds of software; a hacker who puts in a little thought can find it.

Shenyu himself once had a hair-raising run-in with a "hacker".

In 2015, while maintaining his own mining pool, Shenyu received an interview request from a reporter at a "well-known foreign media outlet". After a brief exchange of pleasantries, he happily agreed to the interview.

From mining pools to mining rigs, from the state of the cryptocurrency industry to future blockchain trends, the "reporter's" questions ranged widely and were quite professional.

"At the time I wasn't suspicious in the slightest," Shenyu told us.

The interview went on for a month. A month later, the "reporter" wrote up the interview and sent Shenyu a Word document to confirm.

"I generally don't accept Word documents, because a Word document can easily carry an implanted trojan, while a PDF is comparatively much safer," Shenyu explains. He asked the "reporter" to convert the Word document to PDF, but the other side said it was "inconvenient" and kept making excuses, refusing to send a PDF.

That detail suddenly put Shenyu on guard. He found a fresh computer and moved the document into a virtual machine (a relatively safe, sealed environment as far as the whole computer is concerned), and it turned out the Word document really was rigged.

Had he opened that file on his everyday computer, a trojan would have been implanted immediately, all the data on the machine would have leaked bit by bit, and both data and assets would have been at risk.

"I chatted with him a bit more, and in the end it emerged that he was a hacker," Shenyu says. "As long as you are worth robbing, a hacker will gladly take the trouble."

Beyond attacks aimed at a single person, hackers also attack ordinary users indiscriminately and at scale.

On March 20, 2018, SlowMist discovered the notorious "Ethereum Valentine's Day" attack. This time the hackers spread the love around: they attacked Ethereum wallets continuously for several years, raking in tens of millions of dollars.

It all goes back to February 14, 2016.

On Valentine's Day that year, using eth_sendTransaction ― a command that can transfer funds straight out of an Ethereum wallet ― a hacker stole 26.7 ETH on the Ethereum network for the first time.

The technique then spread across the whole Ethereum network.

First, the hackers scan the world for Ethereum nodes whose RPC API is exposed; with that interface they can check the chain height, wallet addresses, and balances.

Then, by calling eth_sendTransaction over and over, they try to move the balance at an address into a wallet the hacker prepared in advance.

If this happens to coincide with the user having just unlocked their wallet, some clients will by default not ask for the password again within 300 seconds. At that moment the hacker's eth_sendTransaction call takes effect and sweeps the funds out of the wallet.

SlowMist has catalogued the IP addresses currently carrying out attacks of this kind; there are 31 in total.
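For node operators, a quick defensive check is to see whether your own node's JSON-RPC endpoint answers unauthenticated requests ― exactly what this attack relies on. The sketch below assumes the Python requests library; the URL is a placeholder for your own node, and only nodes you operate should be tested.

import requests

def rpc_exposed(url="http://127.0.0.1:8545"):
    payload = {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
    try:
        r = requests.post(url, json=payload, timeout=3)
        return "result" in r.json()   # a result means the RPC endpoint answered us
    except requests.RequestException:
        return False

if rpc_exposed():
    print("RPC answers without authentication - do not expose this port publicly.")
else:
    print("RPC not reachable from here.")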

Physical attacks of every description

Beyond network attacks, there are also other large-scale "physical attacks" aimed at cryptocurrency.

Northern Europe, with its legendary "Viking" history, has seen several home-invasion kidnappings targeting high-net-worth cryptocurrency holders in recent years.

With a knife at your throat, of course you hand over whatever they ask for.

In 2017, the well-known cryptocurrency exchange BTC-e got caught up in money-laundering allegations, and the FBI marched straight into its IDC server room and carried off the servers.

The trouble was that the bitcoins of the exchange's entire user base were stored on those few servers; in the end users lost two-thirds of their bitcoin.

Beyond that, there are "supply chain attacks" aimed specifically at hardware wallets.

Shenyu explains that a "supply chain attack" means that somewhere along the journey from the factory to the customer's hands, a hacker tampers with the hardware wallet: the wallet's random numbers are no longer random, so the hacker may be able to break the wallet and transfer the customer's assets away.

This is not scaremongering.

In January 2018, a user going by moodyrocket bought a second-hand hardware wallet on eBay and then transferred $34,000 worth of cryptocurrency into it.

A week later, when moodyrocket opened the wallet again, he was shocked to find the balance was zero.

It turned out moodyrocket had bought the second-hand wallet from a hacker, who had written his own recovery phrase into the device in advance instead of using the random one provided by the manufacturer, Ledger.

The hardware wallet, considered the safest option of all, had been breached too.

Defending wallet security

Hackers' methods for stealing from wallets come in every flavor: cracking the random-number algorithm outright, reading clipboard data and breaking into computers, altering a hardware wallet's random numbers, even working with criminal gangs on physical attacks ― "the only limit is imagination; there is nothing hackers won't do."

Faced with such a variety of theft methods across the internet, how should wallet entrepreneurs respond?

First, the wallet team itself has to put security engineering first.

There are many open-source systems available, but they contain plenty of security holes; "developers need to rework and harden the architecture," Shenyu says.

That means each cryptocurrency needs its own safeguards, which often takes months of preparation and testing.

"For example, some cryptocurrencies use non-standard signature algorithms. Before listing such a coin you first have to implement the signature algorithm, then work with the hardware manufacturer to burn it into the device, then go through an audit by a security firm, and so on," Shenyu explains.

Second, different users can be given different security policies.

Novice users, for instance, don't have a deep appreciation of how important "storing the private key" is; for them, the product flow can be designed so that users never touch the core logic.

"Features like exporting and importing the private key ― simply don't add them," Shenyu argues. "Because once users import and export keys themselves, their assets are easily stolen the moment a hacker targets them."

As for storing the private key, information written on a small slip of paper is easily damaged or lost; a metal mnemonic plate that you assemble yourself and keep somewhere safe works better.

At present hackers mostly attack through online channels, so by comparison a hardware wallet is relatively safe.

Keeping a hardware wallet secure comes down to two things: first, a secure cryptographic chip; second, "supply chain attack detection", i.e. checking whether anything was implanted in the wallet between manufacture and the moment it reaches the user.

Teams need to keep honing their own security engineering; what can users do to protect their own cryptocurrency?

Yuxian, co-founder of SlowMist, recommends turning a second-hand iPhone into a makeshift hardware wallet: "install a decentralized wallet on a second-hand iPhone, reset everything unrelated, and if you need connectivity, use a 4G network."

Doing all of this greatly reduces the chance of a hacker stealing your cryptocurrency.

One more piece of advice: if you hold a large amount of cryptocurrency, stay low-key.

"People who show off may get targeted. Hackers sit in WeChat and QQ groups doing a bit of social engineering, and the showy ones get picked out, which raises the odds of being robbed," Shenyu says.

In January this year, Russian cryptocurrency investor Nyashin had just bragged online about his cryptocurrency wealth when he was targeted. He was attacked and robbed at home, and the attackers made off with 24 million rubles ($425,000) worth of assets. In despair, Nyashin took his own life.

Strictly speaking this was not hacker theft, but the violence is reminder enough for cryptocurrency investors to "stay low-key at all times".

Because so much has to be invested in security, cryptocurrency wallets are not an easy field to build a startup in.

"Our judgment is that the wallet will evolve from an early tool for managing private keys into the entry point for DApps and similar things," Shenyu believes; an entry-point opportunity like that can give birth to a great company.

But before it becomes the gateway to the cryptocurrency world, a wallet first has to face the hackers' hail of bullets. Holding the last line of defense is a challenge every wallet builder and every user has to face squarely.

New PowerShell-based Backdoor Found in Turkey, Strikingly Similar to MuddyWater ...


MuddyWater is a well-known threat actor group that has been active since 2017. They target groups across the Middle East and Central Asia, primarily using spear phishing emails with malicious attachments. Most recently they were connected to a campaign in March that targeted organizations in Turkey, Pakistan, and Tajikistan.

The group has been quite visible since the initial 2017 Malwarebytes report on their elaborate espionage attack against the Saudi Arabian government. After that first report, they were extensively analyzed by other security companies. Through all that, we’ve only seen minor changes to the tools, techniques and procedures (TTPs) they have used.

However, we recently observed a few interesting delivery documents similar to the known MuddyWater TTPs. These documents are named Raport.doc or Gizli Raport.doc (titles mean “Report” or “Secret Report” in Turkish) and maliyeraporti (Gizli Bilgisi).doc (“finance (Confidential Information)” in Turkish) ― all of which were uploaded to Virus Total from Turkey. Our analysis revealed that they drop a new backdoor, which is written in PowerShell as MuddyWater’s known POWERSTATS backdoor. But, unlike previous incidents using POWERSTATS, the command and control (C&C) communication and data exfiltration in this case is done by using the API of a cloud file hosting provider.

The screenshots below show the malicious attachments, which are disguised to look real, similar to any typical phishing document. The images show blurry logos that we’ve identified as belonging to various Turkish government organizations ― the logos add to the disguise and lure users into believing the documents are legitimate. Then the document notifies users that it is an “old version” and prompts them to enable macros to display the document properly. If the targeted victims enable macros, then the malicious process continues.



Figure 1. Fake Office document tries to get user to enable malicious macros. The blurred document contains logos of different Turkish government entities



Figure 2. A similar fake Office document has blurred logos for a Turkish government institution related to taxes

The macros contain strings encoded in base52, which is rarely used by threat actors other than MuddyWater. The group is known to use it to encode their PowerShell backdoor.

After enabling macros, a .dll file (with a PowerShell code embedded) and a .reg file are dropped into %temp% directory. The macro then runs the following command:

“C:\windows\System32\cmd.exe” /k %windir%\System32\reg.exe IMPORT %temp%\B.reg

Running this registry file adds the following command to the Run registry key:

rundll32 %Temp%\png.dll,RunPow

Figure 3. Run registry key
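As a defensive aside (our addition, not part of the original analysis), Run-key persistence of this shape ― rundll32 launching a DLL out of a temp directory ― can be hunted for with nothing but Python's standard winreg module; the matching strings below are simple heuristics.

import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def run_entries(hive):
    """Yield (name, command) pairs from a hive's Run key, if it exists."""
    try:
        key = winreg.OpenKey(hive, RUN_KEY)
    except OSError:
        return
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
        except OSError:
            break
        yield name, str(value)
        i += 1

for hive in (winreg.HKEY_CURRENT_USER, winreg.HKEY_LOCAL_MACHINE):
    for name, value in run_entries(hive):
        lowered = value.lower()
        if "rundll32" in lowered and ("\\temp\\" in lowered or "%temp%" in lowered):
            print("suspicious Run entry: %s -> %s" % (name, value))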

We assume that RunPow stands for “run PowerShell,” and triggers the PowerShell code embedded inside the .dll file. The PowerShell code has several layers of obfuscation. The first layer contains a long base64 encoded and encrypted code with variables named using English curse words.



Figure 4. Encrypted PowerShell code

The other layers are simple obfuscated PowerShell scripts. But the last layer is the main backdoor body. This backdoor has some features similar to a previously discovered version of the Muddywater backdoor.

Firstly, this backdoor collects the system information and concatenates various pieces of information into one long string. The data retrieved includes: OS name, domain name, user name, IP address, and more. It uses the separator “::” between each piece of information.



Figure 5. String of system information collected from the victim’s system

The previous MuddyWater version collected similar information but used a different separator:



Figure 6. String of system information collected from the victim’s system, from older Muddywater backdoor sample

As mentioned above, another difference between this and older Muddywater backdoors is that C&C communication is done by dropping files to the cloud provider. When we analyzed further, we saw that the communication methods use files named <md5 (hard disk serial number)> with various extensions depending on the purpose of the file.

.cmd - text file with a command to execute
.reg - system info as generated by the myinfo() function, see screenshot above
.prc - output of the executed .cmd file, stored on the local machine only
.res - output of the executed .cmd file, stored on cloud storage

Figure 7. Example of .cmd file content



Figure 8. Example of .reg file content



Figure 9. Example of .res file content

In both the older version of the MuddyWater backdoor and this recent backdoor, these files are used as an asynchronous mechanism instead of connecting directly to the machine and issuing a command. The malware operator leaves a command to execute in a .cmd file, and comes back later to retrieve the .res files containing the result of the issued command.

However, in the older MuddyWater backdoor the file content was encoded differently, and the files were temporarily stored on compromised websites. The more recent backdoor uses a legitimate cloud storage service provider instead.

The .res file can be decoded by replacing “00” with empty string, then converting from hex to ASCII, then reversing the string. The figure below is the decoded .res file from Figure 9.



Figure 10. Decoded .res file
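Since the decoding steps are spelled out above (strip the “00” filler, hex-decode, then reverse the string), they are easy to reproduce. The following is a minimal Python sketch; the encode_res helper and the “uptime” example are invented for illustration and are not taken from the actual samples.

# Minimal sketch of the .res decoding described above:
# 1) replace "00" with an empty string, 2) convert hex to ASCII, 3) reverse the string.
def decode_res(payload):
    stripped = payload.replace("00", "")                         # drop the "00" filler
    text = bytes.fromhex(stripped).decode("ascii", errors="replace")
    return text[::-1]                                            # plaintext is stored reversed

# Invented example: encode "uptime" the same way (reverse, hex-encode, pad each byte with "00").
def encode_res(plaintext):
    return "".join(f"{ord(c):02x}00" for c in plaintext[::-1])

encoded = encode_res("uptime")
print(encoded)              # 65006d006900740070007500
print(decode_res(encoded))  # uptime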

The backdoor supports the following commands:

$upload: upload a file to the file hosting service
$dispos: remove persistence
$halt: exit
$download: download a file from the hosting service
No prefix: execute the command via Invoke-Expression (IEX), a PowerShell cmdlet that runs commands or expressions on the local computer

Based on our analysis, we can confirm that the targets were Turkish government organizations related to the finance and energy sectors. This is yet another similarity with previous MuddyWater campaigns, which were known to have targeted multiple Turkish government entities. If the group is responsible for this new backdoor, it shows how they are improving and experimenting with new tools.

Solutions and Recommendations

The main delivery method of this type of backdoor is spear-phishing email carrying documents with malicious macros.

Arlo is planning to launch a 4K smart home security camera next year


Arlo plans to launch a 4K security camera in 2019, potentially stealing a lead on competitors and kicking off a new race in the smart home security market.

The Arlo Ultra camera looks similar to the company's existing battery-powered smart home security cameras and will stream 4K video to users and record it. That's a step beyond the IQ Outdoor camera from Nest, which uses a 4K image sensor but is only capable of streaming in high definition. Compared to HD, a 4K image has four times the resolution and that should mean extra clarity and more detail when users zoom in on images.

It will cost $400.

The camera also supports HDR (high-dynamic range), which attempts to even out differences in dark and bright areas of an image, as well as color night vision, the company said.

4K video requires more bandwidth and more storage space than HD, although Arlo didn't immediately say what speed of home Internet connection would be required. An HD camera typically needs an upload speed of around 1Mbps, so the Ultra will likely need something in the 3-5Mbps range.



The Arlo Ultra camera, base station and accessories.

Users will be able to watch 4K video streamed live from the camera. A basic Arlo Smart Premier subscription ($120/year) will cover 30 days of cloud recording in high definition, but you'll have to pay an as-yet-undisclosed premium for cloud 4K storage. Users will also be able to pop an SD card into the Arlo base station for free local 4K recording and storage.

One of the reasons Arlo has scored so highly in TechHive's reviews and testing is a generous week of free cloud storage, well beyond what major competitors offer. Arlo wasn't willing to commit to continuing the offer for 4K footage.

The Arlo Ultra also comes with a built-in LED light, but the company declined to say how bright it is or how it compares to the company's recently launched set of stand-alone home security lights.



The Arlo Ultra camera.

A new base station, compatible with existing Arlo cameras, will be supplied with the Ultra. So current owners will be able to swap out their old base station without affecting their existing setup, according to the company.

Like current Arlo cameras, the Ultra runs on batteries and connects to the base station over a wireless link. That means no cables are required, offering ultimate flexibility in where the camera can be mounted. Arlo uses a magnetic mount to attach cameras to a wall. For the new Ultra, Arlo says the microphones and batteries have been upgraded.



The Arlo Ultra camera.

The Arlo Ultra is promised for the first quarter of 2019 and will be on display at the Consumer Electronics Show in Las Vegas in January.

This story, "Arlo is planning to launch a 4K smart home security camera next year" was originally published by TechHive .

Netgear's new Arlo Ultra security camera will monitor your home in 4K


Netgear has announced the Arlo Ultra, a 4K HDR wireless security camera that looks to be a serious upgrade over the Arlo Pro 2. The biggest upgrade is in the resolution. This is Netgear's first Arlo security camera to have 4K resolution and HDR image processing. Netgear believes users will be able to use the resolution to pick up on critical details they may have otherwise missed, like license plate numbers or clothing, when identifying suspicious activity.

The camera will also come with a 180-degree field of view, automatic zoom and tracking, dual microphones, and more. The Ultra finally has a spotlight integrated into it. Previously, you had to buy the Arlo Smart Home Security Light and sync it with your system. Other features include color night vision and custom activity zones you can set yourself. The two-way audio includes advanced noise cancellation to minimize background noise and allow clear, natural conversations from both sides of the camera.

Use the security camera inside or out. The new magnetic mounts allow you to attach the cameras anywhere you want, including walls, ceilings, or gutters. They come with weather-resistant charging cables so you can keep the batteries charged even outdoors.

You'll also get a one-year subscription to Arlo Smart Premier. This is a service that Arlo has continued to upgrade over time, so what you'll end up with is the best possible iteration of it. A year would normally cost $119.88. With the subscription, you'll get access to features that naturally make your camera better, including computer vision technology that tells you exactly what triggered your cameras, direct contact to emergency services, custom alerts, Person Detection, Cloud Activity Zones, and more.

The Ultra will connect to the new Arlo SmartHub. This base station connects to your router, provides extended Wi-Fi range for Arlo cameras, and allows you to manage the camera's data traffic. The SmartHub can control multiple Arlo cameras, and it has a microSD card slot for local storage. You'll probably want a couple of large microSD cards to use with the hub so you don't have to keep absolutely everything in the cloud.

The Arlo Ultra will debut at $399.99 for the one-camera system that includes the new SmartHub. That does include the $120 value of Arlo Smart Premier, too, so it's not a bad price. Still, you could get started with two Arlo Pro 2 cameras at that price or go less expensive with the one-camera Arlo Pro. The Ultra should be available for pre-order sometime today on the Arlo site and available everywhere else in 2019.



Attackers Up Their Game with Latest NPM Package Compromise


The software supply-chain attacks targeting development ecosystems and package repositories like npm are getting increasingly sophisticated. In the latest incident, an attacker combined social engineering with dependency abuse to backdoor a package with 2 million weekly downloads.

It all started a week ago when someone noticed that a package called flatmap-stream, a dependency of event-stream, was injecting some AES-encrypted code. Event-stream is a toolkit that helps developers create and work with streams in Node.js more easily. It is used by almost 1,600 Node packages and gets downloaded around 1.8 million times per week from the npm registry.

Flatmap-stream was added as a dependency to event-stream back in September; not by the original maintainer, but by a user who received publishing rights to the package. It seems that this user, using the handle right9ctrl (now suspended on GitHub and npm), managed to convince the original author, Dominic Tarr, to transfer the package to him after making a few legitimate code contributions.

“He emailed me and said he wanted to maintain the module, so I gave it to him,” Tarr said in a discussion on GitHub. “I don’t get anything from maintaining this module, and I don’t even use it anymore, and haven’t for years.”

People who analyzed the malicious code concluded that it only injected its payload if the environment contained a library called copay-dash that’s part of Copay, a secure Bitcoin and Bitcoin Cash wallet platform for desktop and mobile devices. This means the attack was designed to target only Copay developers, who would have that library in their environments, with the intention of poisoning the official builds of the application that would then be distributed to users.

The final payload injected into Copay was designed to steal users’ private keys for wallets that contained more than 100 Bitcoins or 1,000 Bitcoin Cash and send those keys along with account details to a remote server controlled by hackers.

In conclusion, this was a highly targeted multi-stage attack where one application was compromised through a backdoored dependency higher up in the supply chain. To understand its complexity, here is how the npm security team described the attack chain in a post-mortem analysis:

The injected code:

1. Read in AES encrypted data from a file disguised as a test fixture;
2. Grabbed the npm package description of the module that imported it, using an automatically set environment variable;
3. Used the package description as a key to decrypt a chunk of data pulled in from the disguised file.
The decrypted data was part of a module, which was then compiled in memory and executed.

This module performed the following actions:

1. Decrypted another chunk of data from the disguised file;
2. Concatenated a small, commented prefix from the first decrypted chunk to the end of the second decrypted chunk;
3. Performed minor decoding tasks to transform the concatenated block of code from invalid JS to valid JS (we believe this was done to evade detection by dynamic analysis tools);
4. Wrote this processed block of JS out to a file stored in a dependency that would be packaged by the build scripts.

The chunk of code that was written out was the actual malicious code, intended to be run on devices owned by the end users of Copay.

This code would do the following:

1. Detect the current environment: Mobile/Cordova/Electron;
2. Check the Bitcoin and Bitcoin Cash balances on the victim’s Copay account;
3. If the current balance was greater than 100 Bitcoin or 1,000 Bitcoin Cash: harvest the victim’s account data in full and the victim’s Copay private keys;
4. Send the victim’s account data and private keys off to a collection service running on 111.90.151.134.

The Copay developers confirmed that versions 5.0.2 to 5.1.0 of the application contained the malicious code. The npm security team took ownership of the event-stream package and removed version 3.3.6 from the registry, as well as the malicious flatmap-stream dependency.

“For npm users, you can check if your project contains the vulnerable dependency by running npm audit,” the team said. “If you have installed the impacted version of this event-stream, we recommend that you update to a later version as soon as possible.”
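Beyond npm audit, a lockfile can also be checked by hand. The sketch below is illustrative only: it assumes the older npm lockfile layout with nested “dependencies” objects, and the flagged versions are simply the ones reported in this write-up.

import json

# Packages and versions flagged in the incident; None means any version is suspicious.
SUSPECT = {"flatmap-stream": None, "event-stream": "3.3.6"}

def scan(deps, path=""):
    """Recursively walk a nested 'dependencies' map and report suspect packages."""
    hits = []
    for name, info in (deps or {}).items():
        version = info.get("version", "?")
        bad_version = SUSPECT.get(name)
        if name in SUSPECT and (bad_version is None or version == bad_version):
            hits.append(f"{path}{name}@{version}")
        hits.extend(scan(info.get("dependencies", {}), f"{path}{name} > "))
    return hits

with open("package-lock.json") as f:
    lock = json.load(f)

for hit in scan(lock.get("dependencies", {})):
    print("found:", hit)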

This incident highlights how hard it is to detect and defend against these attacks, especially in huge ecosystems like npm where there are hundreds of thousands of packages with millions of interdependencies. The malicious code was added in September and was not noticed for two months, more than enough time for hackers to achieve their goals.

The attack also shows that it’s not even necessary to compromise a developer’s machine in order to inject malicious code into a package. Finding devs who are willing, even eager, to give away packages written a long time ago is probably not that hard. Tarr, for example, is the author of over 400 packages hosted on npm and he’s likely not the only developer who no longer has the time or interest to maintain some of his old creations.

There have also been cases in the past where attackers posed as companies and bought WordPress plug-ins or Google Chrome extensions from their original developers for significant amounts of money. They then added rogue code to those components in order to inject ads into websites or browsing sessions. So paying for access is another option that attackers have.

End users have very few ways to detect such attacks, but developers should make use of all the existing technologies to make life harder for attackers. For example, according to Thomas Hunter II of application security firm Intrinsic, the Copay developers could have taken steps that would have blocked the final payload.

“The attack could have been prevented by making use of CSP (Content Security Policy),” he said in a blog post about the incident. “This is a standard for specifying which URLs a webpage can communicate with and is specified via web server headers. Cordova [a framework used by Copay] even has its own mechanism for specifying which third-party services can be contacted. However, the Copay application appears to have disabled this feature.”

“These supply-chain attacks are only going to become more and more prevalent with time,” Hunter said. “Targeted attacks, like how this package specifically targets the Copay application, will also become more prevalent.”


Starwood hacked with over 500 million customer details accessed


If you’ve stayed at a Starwood hotel in the past few years, it’s time to buy some credit monitoring. The Marriott International-owned hotel brand has reported a massive hack that saw the details of over 500 million customers accessed by an unauthorized party.

The hotel chain says that the attackers have been able to access the company’s internal network ― including the guest reservation database ― since 2014.


For 327 million unlucky customers, the data accessed includes sensitive personal information like address details and passport numbers.

Marriott International, which acquired Starwood in 2016, says the information also contains payment card data. This was saved in an encrypted form, but the firm could not rule out the possibility that the hackers had also made off with the encryption keys.

In a statement, the company apologized for the incident, and said it has reported the incident to the relevant law enforcement and regulatory authorities.

Commenting on the hack, Tom van de Wiele, security consultant at F-Secure, bemoaned the fact that it took Marriott over four years to detect the breach.

“The most disappointing part of this hack is the fact that the amount of data stolen is one of the bigger ones of the last few years and further made worse by the fact that the compromise had been going on for at least four years according to several online publications. This indicates that as far as security monitoring and being able to respond in a timely and adequate fashion, Marriott had severe challenges being able to live up to its mission statement of keeping customer data safe,” he said.

Security experts are recommending that Starwood customers contact their banks for a replacement credit card, and to start monitoring their credit history for fraud.

“Although it might be a nuisance, affected customers should contact their credit card company to disable their compromised card, create a new account and order a replacement. By now, I am sure we have all had to do this. In addition, those people will need to begin (or continue) monitoring their credit history,” said Bill Evans, senior director at One Identity.

Fortunately, for customers in the UK, Canada, and the United States, there’s some good news on that front. Marriott is offering a year’s subscription to the WebWatcher fraud protection service. To find out how to sign up, click here.


3 Steps to Get on the Right Side of GDPR Compliance


American small businesses may not have paid much attention when the European Union finalized its new data privacy law. Many assumed the General Data Protection Regulation (GDPR) applies only to European businesses, or at least those doing significant business there. In fact, though, GDPR applies to virtually every business that processes any personal data on any EU citizen or resident.

The GDPR is a lengthy, dense legal document with hundreds of clauses to understand and address. Compliance can seem daunting for many small and medium-sized businesses, but a violation can carry serious consequences. A major breach of personal data, for example, can subject businesses to large fines, whether or not the breach was the fault of the business. The good news: a few small steps can put most U.S. small businesses on the right side of GDPR compliance.


1. Determine How GDPR Applies to the Business

It’s wise to seek legal advice from someone who is well versed in both GDPR and U.S. data privacy laws. The key item to discuss is what types of personal data the business collects. Even data that’s not considered personally identifiable information in the context of U.S. law may be relevant to GDPR. Something as simple as developing an email database to send people a company newsletter could fall under GDPR, because an EU citizen or resident could sign up.

In some cases, a business may determine that a few simple, inexpensive changes to how it manages data can free it from GDPR compliance requirements entirely. The deluge of privacy notices popping up on websites and terms-of-service updates arriving in people’s inboxes are examples of companies making quick updates to comply with GDPR.

2. Conduct an Independent GDPR Compliance Assessment

After determining how GDPR applies to the business, the next step is to discover where the business is already in compliance and where it isn’t. The best approach is an independent compliance audit. At minimum, an auditor should review the business’s privacy policy (and other policies related to personal data collection), prepare a report indicating any potential compliance issues and provide prioritized recommendations on how to address each issue. A more thorough audit would also review the business’s processes, technologies and other mechanisms used to collect, process and safeguard personal data.



Some companies, particularly those that have recently had a similar assessment for PII protection, may find that their greatest need is to understand what their privacy policies should include to accommodate the additional data covered by GDPR, assuming they can then propagate policy changes to the relevant procedures.

But a business’s policies are always at the root of everything else it does. At the least, it should have an outside party with strong GDPR knowledge assess those policies. This should give the business the greatest benefit for the least expense.

3. Address Data Protection Deficiencies

In addition to updating its privacy policy and privacy notices to be GDPR-compliant, there are many other things a business may have to do to achieve and maintain compliance . One example is strengthening and expanding mechanisms to handle privacy-related requests from people whose data has been collected, such as to correct errors or to delete the data altogether.

Another example: ensuring all content delivered to people electronically, such as an email newsletter, is on an opt-in basis.

A business may also modify its breach detection and reporting processes to comply with GDPR requirements, train staff on their roles and responsibilities under GDPR, and strengthen the technical controls the business deploys to protect personal data. Several technologies can help businesses with compliance, such as data encryption (including full-disk encryption solutions); multifactor authentication and privileged access management solutions; and server security technologies.

Finally, as part of addressing deficiencies, it’s critically important that a business also coordinate with all third parties handling personal data on its behalf. These vendors must also be GDPR-compliant; a business could be held liable if a third party acting on its behalf violates GDPR. It’s vital that contracts and other agreements with those businesses be updated to require this compliance and to establish processes on such things as how potential breaches of the personal data will be handled and reported.

Many small businesses have been confused by GDPR since the regulation took effect in May. And while it is intimidating at first, it doesn’t need to be. Following these steps can help a business determine what it needs to do and make progress toward being compliant.



The Marriott Hack: How to Protect Yourself


Early Friday morning, the hotel behemoth Marriott announced a massive hack that impacts as many as 500 million customers who made a reservation at a Starwood hotel. Marriott acquired the Starwood hospitality group, which operates numerous hotel brands including Sheraton, Westin, Aloft, and W Hotels, in September 2016. But the intrusion that caused the enormous data breach predates Marriott's acquisition, beginning in 2014.

Marriott says it is cooperating with law enforcement and regulators in investigating the hack, and the company hasn't finalized the number of people impacted. It currently seems that about 170 million Marriott customers only had their names and basic information like address or email address stolen. But the bulk of the victims―currently thought to be 327 million people―had different combinations of name, address, phone number, email address, date of birth, gender, trip and reservation information, passport number, and Starwood Preferred Guest account information all stolen.

"Four years is an eternity when it comes to breaches."

David Kennedy, TrustedSec

Some credit card numbers were also stolen as part of the breach, Marriott says, but the company did not provide an initial estimate of how many were taken. The credit card numbers were encrypted with the algorithm AES-128―a reasonably robust choice―but Marriott says the attackers may have also compromised the decryption keys needed to unlock the data.

All in all, it's not a great situation.

“We deeply regret this incident happened,” Arne Sorenson, Marriott’s president and CEO, said in a statement on Friday. “We are doing everything we can to support our guests. ... We are devoting the resources necessary to phase out Starwood systems and accelerate the ongoing security enhancements to our network.”

A Historic Breach

Breach response experts told WIRED on Friday that the sheer amount of time the attackers had inside the system―four years in all―likely made the breach much worse than it otherwise might have been. Time gives attackers the ability to chip away at defenses, or simply learn more about a system to understand where the valuable data is. Even with encrypted data, like the credit card numbers in this case, an attacker with enough access could steal the decryption keys, or swipe sensitive data before it ever has a chance to be encrypted in the first place. Either scenario seems possible, given the details Marriott has released so far.

“It’s all about key management and doing encryption in the places where an attacker might be,” says Johns Hopkins cryptographer Matthew Green. “There's no point in locking the gates if the bad guy is already inside."

Meanwhile, the attackers also had ample time to encrypt the stolen data as part of their exfiltration strategy. Hackers often use encryption as a tool to mask data and sneak it past a network's "data loss prevention" defenses, which monitor for sensitive data in transit.

Marriott says a digital security tool flagged suspicious attempted access to its United States Starwood guest reservation database on September 8 of this year. The company investigated, and seems to have blocked attacker access by September 10, because it says that no customer data was stolen after that date. But Marriott also says its initial investigation didn't definitively identify the scope of the problem until more than two months later, on November 19.

Marriott says its own digital systems were not affected, only the Starwood side. Some penetration testers and network breach responders speculated to WIRED on Friday that Marriott's acquisition of Starwood may have played a role in delaying detection if the companies were distracted by the larger topic of brokering the deal.

"It's not clear whether the attacker already had access through Starwood before the merger, or whether Marriott had a copy of the database for evaluation purposes and due diligence and lost control of it there," says Jake Williams, founder of the penetration testing and incident response firm Rendition Infosec. "I can't believe that the merger wasn't a contributing factor in the breach."

What You Can Do

Beginning Friday, Marriott is rolling out batches of notification emails to impacted customers. It has also established a call center and a breach notification website, though you can’t use the site to look up whether your information was stolen, or how much of it. Marriott seems to be erring on the side of assuming that every Starwood customer has been impacted. "If you made a reservation on or before September 10, 2018 at a Starwood property, information you provided may have been involved,” the company’s breach response page reads.

"They'll undoubtedly find a way to maliciously use every piece of data they collect."

Crane Hassold, Agari

The company is also offering enrollment in the identity monitoring service WebWatcher for one year to anyone who thinks they were impacted by the four-year network intrusion. You can sign up now. The service alerts you if your information crops up online, including on the dark web. Enrollment also includes a reimbursement benefit for expenses related to fraud and identity theft, and unlimited consultation with identity theft specialists at the corporate incident response firm Kroll. The services are available to people in the US, Canada, and United Kingdom.

If you've stayed at an SPG hotel in the last few years, the standard advice applies: Enroll in the free monitoring, change your SPG password―and on any other account where you might have reused it―and watch your finances for suspicious activity.

The Marriott breach does have a slightly less common, though not unheard of, component of exposing hundreds of millions of passport numbers. These can be used to make counterfeit passports, a classic black market industry . But they can also be combined with other personal details about someone, like the data points stolen in the Marriott breach, to bolster traditional online fraud and abuse. And passport numbers lend an air of legitimacy to other information like name, address, date of birth, and email, potentially allowing scammers to open bank or credit card accounts in victims' names.

Crane Hassold, senior director of threat research at the phishing defense firm Agari, points out that passport numbers can also be used to track someone's movements. For example, US Customs and Border Protection offers a public database for tracking your travel history. Someone with your information, particularly your passport number, can run the queries, too. US citizens can renew their passports at any time to receive a new passport number, applying by mail or in person at an approved State Department facility. If you are years away from a passport's expiration, you may need to include a letter with the application about your reason for renewing early.

"The more information a scammer can collect on an individual the better for them," Hassold says. "They'll undoubtedly find a way to maliciously use every piece of data they collect."

Marriott clearly learned from past corporate breach disclosure gaffes in responding to this incident with resources and information for victims. But it's difficult to simply call it an "incident" when the attack played out over four years. Marriott spokesperson Connie Kim told WIRED that the company's investigation is ongoing, and it doesn't have definite answers yet about how the attackers initially got onto the Starwood network, or how the activity went undetected for so long.

"They are still investigating this heavily and don’t know to what extent attackers had access―this could turn out to be much, much larger," says David Kennedy, CEO of the penetration testing and incident response consultancy TrustedSec. "Four years is an eternity when it comes to breaches. If attackers had access for that long I would assume they had access to virtually everything." He added, laughing, "I know I would."


Number of births in the twentieth century by @ellis2013nz

Motivation

A couple of weeks back, Branko Milanovic asked on Twitter:

“Does anyone know a link to a calculation on how many people were born … in the entire 20th century?”

Somewhat surprisingly, no-one did. However, there was a calculation by the Population Reference Bureau that about 108 billion people had walked the earth since 50,000 years ago. I gave myself the detective job of tracking down the data to make the estimates for just the twentieth century.

The necessary data are crude birth rates and total population numbers. Back to about 1950 there are excellent data available from the UN, but before then they are surprisingly hard to find from official sources.

Kremer’s long range global population estimates

The first useful thing I found was an interesting article in The Quarterly Journal of Economics by Michael Kremer on Population Growth and Technological Change: One Million B.C. to 1990.

That got me this dramatic-looking set of numbers:

The vertical axes in those charts are on a logarithmic scale, so the growth rates in population there are truly astonishing. Humanity has exploded on this planet in a very short period of time, in the scale of things.

Here are just the growth rates:

Some interesting things here include:

- the dip into negative territory in the thirteenth century, with the Mongol wars and the Black Death;
- another bad time in the early seventeenth century, with the Thirty Years' War destroying Germany and the collapse of the Ming dynasty in China;
- a dip around 1850, which I think is probably associated with the Taiping Rebellion in China. Despite its massive scale we saw a decline in growth, but nowhere near down to negative territory;
- a final dip associated with World War I and the subsequent influenza pandemic (the years between 1900 and 1920);
- the maximum growth rate in 1960, with a rapid decline since.

There would be better sources now for the recent years, showing the steep decline continuing. For the record, on current trends of declining growth rates the world population will reach a maximum level of around 11 billion in 2100.

I typed out Kremer’s numbers rather than muck around trying to read them electronically. Note that there seems to be an error in the third growth rate in his Table I on page 683; while all his other growth rates exactly match my calculations based on his population levels, for that particular number he has 0.000031 and I get 0.000012. I don’t think it matters much for the substance of his interesting arguments in that paper.
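For reference, the growth rates being checked here are annualised rates implied by a pair of population levels, which is exactly what the mutate(growth = ...) line in the R code below computes:

$$ g = \left(\frac{P_t}{P_s}\right)^{1/(t - s)} - 1 $$

For the third interval, from 25,000 BC (3.34 million people) to 10,000 BC (4 million people), this gives $(4/3.34)^{1/15000} - 1 \approx 0.000012$, matching the corrected value.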

Here’s the R code to create those charts with Kremer’s numbers:

library(tidyverse)
library(viridis)
library(gridExtra)
library(ggrepel)
library(readxl)
library(testthat)
library(scales)   # provides the comma and percent label formatters used below

set.seed(123)

#-----------------total population size---------------
kremer <- data_frame(
  year = c(-1000000, -300000, -25000, -10000, -(5:1)*1000, -500, -200, 1, 200, 400, 600, 800,
           10:16 * 100, 1650, 1700, 1750, 1800, 1850, 1875, 1900, 192:199 * 10),
  pop = c(0.125,1,3.34,4,5,7,14,27,50,100,150,170,190,190,200,220,
          265,320,360,360,350,425,545,545,610,720,900,1200,1325,
          1625,1813,1987,2213,2516,3019,3693,4450,5333) * 1000000
) %>%
  mutate(growth = lead((pop / lag(pop)) ^ (1 / (year - lag(year))) - 1))
# note there's an error in Kremer's 3rd growth rate: 0.000031 should be 0.000012

# Global populations
p1 <- kremer %>%
  ggplot(aes(x = year, y = pop)) +
  geom_line() +
  geom_point() +
  scale_y_log10(label = comma, limits = c(1e5, 5e9)) +
  scale_x_continuous("Year") +
  labs(caption = " ")

p2 <- p1 %+% filter(kremer, year > -2000)

grid.arrange(
  p1 + labs(y = "Global human population (logarithmic scale)") +
    ggtitle("World population", "1 million BC to 1999"),
  p2 + labs(y = "") +
    ggtitle("", "1,000 BC to 1999") +
    labs(caption = "Source: Kremer, 1993, 'Population Growth and Technological Change: One Million B.C. to 1990"),
  ncol = 2
)

# Global growth rates
kremer %>%
  filter(year > -2000) %>%
  ggplot(aes(x = year, y = growth)) +
  geom_path() +
  geom_point() +
  geom_text_repel(aes(label = year), colour = "steelblue") +
  scale_y_continuous("Annual growth rate", label = percent) +
  labs(caption = "Source: Kremer, 1993", x = "Year") +
  ggtitle("World population growth rates", "1000 BC to present")

Gapminder’s country-level birth rate and population estimates

That was all very interesting and gave me at least some benchmark population values covering the whole twentieth century (not just the post-WWII period covered in the official sources), but I also need crude birth rates: how many people are born per 1,000 people living. This is the only practical way of getting estimates of the number of births; growth rates alone won't do it, because the same growth rate could mean quite different birth rates, depending on death rates.

Eventually I realised that Gapminder publish estimates of many variables, including basic demographic data, back to 1800 at the country level. Gapminder Foundation is the Swedish NGO founded by the recently deceased and much missed Hans Rosling and friends and family, promoting increased understanding of development issues in a historical context. It’s possible to combine these to get global estimates by summing up population numbers, and creating population-weighted averages for birth rates.

This process gives me this plausible-looking set of estimates of the world’s crude birth rate over the last 200 years or so:

These numbers are close enough (for my purposes) to the UN’s figures for the overlapping period from 1950 onwards.

I also quite like this representation of the population and the birth rate together as a connected scatter plot:

Obviously, once we have crude birth rates and population numbers, we just need to multiply them together to get an estimated number of births per year:
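In symbols (this is the births = total_population * birth_rate / 1000 line in the code below), the estimate for a year $y$ is simply

$$ \text{births}_y \approx \text{population}_y \times \frac{\text{CBR}_y}{1000} $$

where $\text{CBR}_y$ is the crude birth rate per 1,000 people.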

What’s with that jump in the 1980s? Well, the decline in crude birth rate stalled at around 27 or 28 for 15 years or so, and with the massive increase in population coming from rising living standards post-WWII that was enough for a major increase in number of babies being born. By the 1990s, crude birth rates resumed a precipitous decline. There also might be a story here about demographic collapse in post-Soviet Union countries; or there might be a quirk in the data arising from how I aggregated up the country level data.

Here’s the code to get the data from Gapminder and draw those charts:

#---------------birth rate---------------
# harder
# See https://ourworldindata.org/fertility-rate
# nothing before 1950
# The gapminder R package only has data for every 5 years, from 1952
# but the gapminder website has the full data

# crude birth rate per country per year
if(!file.exists("cbr.xlsx")){
  download.file("https://docs.google.com/spreadsheet/pub?key=tUSeGJOQhafugwUvHvY-wLA&output=xlsx",
                destfile = "cbr.xlsx", mode = "wb")
}

cbr_orig <- read_excel("cbr.xlsx")
names(cbr_orig)[1] <- "country"

cbr <- cbr_orig %>%
  gather(year, birth_rate, -country) %>%
  mutate(year = as.integer(year))

# population per country per year
if(!file.exists("pop.xlsx")){
  download.file("https://docs.google.com/spreadsheet/pub?key=phAwcNAVuyj0XOoBL_n5tAQ&output=xlsx",
                destfile = "pop.xlsx", mode = "wb")
}

pop_orig <- read_excel("pop.xlsx")
names(pop_orig)[1] <- "country"

pop <- pop_orig %>%
  gather(year, total_population, -country) %>%
  mutate(year = as.integer(year))

combined <- cbr %>%
  full_join(pop, by = c("year", "country")) %>%
  arrange(country, year) %>%
  mutate(births = total_population * birth_rate / 1000) %>%
  filter(!is.na(total_population))

ave_br <- combined %>%
  filter(!is.na(birth_rate)) %>%
  group_by(year) %>%
  summarise(birth_rate = sum(birth_rate * total_population) / sum(total_population),
            total_population = sum(total_population)) %>%
  mutate(year_lab = ifelse(year %% 50 == 0, year, ""),
         people_born = birth_rate * total_population / 1000)

set.seed(123)

ave_br %>%
  ggplot(aes(x = year, y = birth_rate)) +
  geom_line() +
  geom_point() +
  geom_text_repel(aes(label = year_lab), colour = "steelblue") +
  labs(caption = "Source: Estimated from Gapminder country level data for crude birth rate and population",
       x = "Year",
       y = "Estimated global crude birth rate per 1,000 population") +
  ggtitle("Estimated global crude birth rate", "1800 to present")

ave_br %>%
  ggplot(aes(x = total_population / 1e6, y = birth_rate, label = year_lab)) +
  geom_path() +
  geom_text_repel(colour = "steelblue") +
  scale_x_continuous("Estimated global population", label = comma_format(suffix = "m")) +
  labs(y = "Estimated global crude birth rate per 1,000 population",
       caption = "Source: Estimated from Gapminder country level data for crude birth rate and population") +
  ggtitle("Estimated global crude birth rate", "1800 to present")

ave_br %>%
  ggplot(aes(x = year, y = people_born / 1e6)) +
  geom_line() +
  scale_y_continuous("People born per year", label = comma_format(suffix = "m")) +
  labs(caption = "Source: Estimated from Gapminder country level data for crude birth rate and population",
       x = "Year") +
  ggtitle("Estimated global births", "1800 to present")

Checking against a more definitive source for more recent years

I was a bit worried about that bump in the 1980s, so I thought I should have a look at a more definitive data source for recent decades rather than relying on my population-weighted average of Gapminder’s country level data. I grabbed the World crude birth rate from the World Development Indicators for 1960 onwards:

Unsurprisingly it’s a bit flaky in the early years, but I decided the two were close enough that I could stick to using my Gapminder estimates. Here’s how I got that World Bank data:

#------------------better source for more recent years----------
library(WDI)

wdi_cbr <- WDI(country = "1W", indicator = "SP.DYN.CBRT.IN", start = 1950, end = 2020)

CairoSVG("..http://freerangestats.info/img/0141-compare.svg", 7, 6)
wdi_cbr %>%
  select(year, SP.DYN.CBRT.IN) %>%
  rename(wdi = SP.DYN.CBRT.IN) %>%
  left_join(ave_br, by = "year") %>%
  mutate(year_lab = ifelse(year %% 5 == 0, year, "")) %>%
  ggplot(aes(x = birth_rate, y = wdi, label = year_lab)) +
  geom_abline(slope = 1, intercept = 0, colour = "orange") +
  geom_path() +
  geom_text_repel(colour = "steelblue") +
  labs(x = "Population-weighted average of Gapminder country data",
       y = "World Bank's World Development Indicators") +
  coord_equal() +
  ggtitle("Comparing two sources on global birth rates")

Cumulative births

Finally, the answer to the question, which turns out to be about 9.75 billion:

The Gapminder data doesn’t have values for every year, but it’s straightforward to interpolate them and get the estimated number of births. That was the final bit of calculating to do.

I saved the birth numbers from 1800 onwards as a CSV in case anyone is interested in them.

Here’s the code for the final step:

#-----------cumulative births-------------
twentieth_c <- data_frame(
  year = 1901:2000,
  births = approx(ave_br$year, ave_br$people_born, xout = 1901:2000)$y
) %>%
  mutate(cum_births = cumsum(births))

twentieth_c %>%
  ggplot(aes(x = year, y = cum_births / 1e6)) +
  geom_line() +
  scale_y_continuous("Cumulative births in the twentieth century", label = comma_format(suffix = "m")) +
  labs(x = "Year",
       caption = "Source: Estimates based on Gapminder country level data for crude birth rate and population") +
  ggtitle("How many people born in the twentieth century?",
          paste("An estimated", format(round(sum(twentieth_c$births / 1e6)), big.mark = ","), "million"))

# Write a version from 1800 onwards in case people want it:
published_data <- data_frame(year = min(ave_br$year):max(ave_br$year)) %>%
  mutate(births = round(approx(ave_br$year, ave_br$people_born, xout = year)$y)) %>%
  mutate(cum_births = cumsum(births))
